Method, computer-readable means to execute the method, and system to apply attributes indicative of a user temperament to a visual representation
Patent abstract:
"expression of visual representation based on the expression of the player" the present invention refers to the techniques of facial recognition and gesture recognition / body posture, in which a system can naturally transmit the emotions and attitudes of a user through the visual representation of the user . the techniques can comprise customizing a visual representation of a user based on the detectable characteristics, deducing a user's temperament from the detectable characteristics and applying attributes indicative of the temperament to the visual representation in real time. the techniques can also include processing changes in user characteristics in the physical space and updating the visual representation in real time. for example, the system can track the user's facial expressions and body movements to identify a temperament and then apply attributes indicative of that temperament to the visual representation .; thus, a visual representation of a user, such as an avatar or imaginary character, can reflect the user's expressions and moods in time 公开号:BR112012000391B1 申请号:R112012000391 申请日:2010-07-06 公开日:2019-12-24 发明作者:Kipman Alex;Wilson Andrew;Stone Perez Kathryn;D Burton Nicholas 申请人:Microsoft Corp;Microsoft Technology Licensing Llc; IPC主号:
Patent description:
Invention Patent Descriptive Report for “METHOD, MEASUREABLE BY COMPUTER TO IMPLEMENT THE METHOD AND SYSTEM TO APPLY INIDCATIVE ATTRIBUTES OF A USER TEMPERAMENT TO A VISUAL REPRESENTATION” Background of the Invention [0001] Often, several applications will display a visual representation that corresponds to a user that the user controls through certain actions, such as selecting buttons on a remote or mobile controller in a specific way. The visual representation can be in the form of an avatar, an imaginary character, a cartoon or animal image, a cursor, a hand, or similar. Visual representation is a computer representation that corresponds to a user who typically adopts the form of a two-dimensional (2D) or three-dimensional (3D) model in various applications, such as computer games, video games, chats, forums, communities, instant messaging services, and the like. Many computing applications, such as computer games, multimedia applications, office applications, or the like, provide a selection of predefined animated characters that can be selected for use in the application as the user's avatar. Some systems may incorporate a camera that has the ability to take a picture of a user and identify the attributes from this data frame. However, these systems require a user attribute capture, image processing and then application to the character in a non-real time environment, and the attributes applied are low fidelity, usually based on a single snapshot of the user. Summary of the Invention [0002] It may be desirable to customize a visual representation of a user based on the detected characteristics of the user and Petition 870190075890, of 07/08/2019, p. 4/79 2/69 it may be desirable to apply the characteristics to the visual representation in real time. Also, it may be desirable that the system processes change to the characteristics of the user in the physical space and can update the visual representation in real time. Of these characteristics, it may be desirable for the system to identify a user's temperament and apply attributes indicative of the temperament to the visual representation of the user. [0003] Techniques for providing a visual representation of a user, such as an avatar or imaginary character, which can reflect the user's temperament in real time, are described in this document. Using facial recognition and gesture / body recognition techniques, the system can deduce a user's temperament. The system can naturally transmit a user's emotions and attitudes through the application of attributes of the user's temperament to the visual representation of the user. Also described are the techniques for tracking the user in the physical space over time and applying modifications or updates to the visual representation in real time. For example, the system can track a user's facial expressions and body movements to identify a temperament and then apply attributes indicative of that temperament to the visual representation. The system can use any detectable characteristics to assess the user's temperament for application to visual representation. [0004] This Summary is provided to introduce a selection of concepts in a simplified way that will be further described below in the Detailed Description. This Summary is not intended to identify key resources or essential resources of the subject matter claimed, nor is it intended to be used to limit the scope of the subject matter claimed. 
In addition, the issue in question claimed is not limited to implementations that solve some Petition 870190075890, of 07/08/2019, p. 5/79 3/69 or all of the disadvantages noted elsewhere in this description. Brief Description of the Drawings [0005] Computer-readable systems, methods and means for modifying a visual representation in accordance with this specification are further described with reference to the attached drawings, in which: [0006] Figure 1 illustrates an exemplary modality of a target recognition, analysis and tracking system with a user playing a game. [0007] Figure 2 illustrates an exemplary modality of a capture device that can be used in a target recognition, analysis and tracking system and that incorporates chaining and animation combination techniques. [0008] Figure 3 illustrates an exemplary modality of a computational environment in which the animation techniques described in this document can be incorporated. [0009] Figure 4 illustrates another example of a computational environment in which the animation techniques described in this document can be incorporated. [00010] Figure 5A illustrates a user's skeleton mapping that was generated from a depth image. [00011] Figure 5B illustrates additional details of the gesture recognition architecture shown in Figure 2. [00012] Figure 6 shows an exemplificative target recognition, analysis and tracking system and an exemplary modality of a user in the physical space and a display of the user's visual representation. [00013] Figure 7 shows an exemplificative flowchart for a method of applying attributes indicative of a user's temperament in a visual representation. Petition 870190075890, of 07/08/2019, p. 6/79 4/69 [00014] Figure 8 shows an example lookup table to deduce a user's temperament. [00015] Figure 9 shows another exemplary target recognition, analysis and tracking system and exemplary modalities of the user in the physical space and exemplary modalities of displaying the user's visual representation. Detailed Description of the Illustrative Modalities [00016] Techniques for providing a visual representation of a user, such as an avatar, which may reflect the user's temperament are described in this document. The visual representation of the user can be in the form of a character, an animation, an avatar, a cursor on the screen, a hand, or any other virtual representation that corresponds to the user in the physical space. Using facial recognition and gesture / body posture recognition techniques, a system can naturally convey a user's emotions and attitudes through the user's visual representation. For example, a capture device can identify the attributes of a user and customize the visual representation of the user based on these identified features, such as emotions, expressions and moods. In an exemplary mode, the system generates and uses aspects of a person's skeleton or mesh model based on image data captured by the capture device, and uses body recognition techniques to determine the user's temperament. [00017] Techniques for displaying visual representation in real time and applying attributes indicative of a user's temperament to visual representation in real time are also described. The system can track the user in the physical space over time and apply modifications or updates to the visual representation in real time. The system can track detectable characteristics, such as Petition 870190075890, of 07/08/2019, p. 
7/79 5/69 features, user gestures, application status, etc., to deduce a user's temperament. User characteristics, such as, for example, facial expressions and body movements, can be used to deduce a temperament, and then the attributes of this temperament can be applied to the visual representation, so that the visual representation reflects the temperament of the user. user. For example, the capture device can identify a user's behaviors and mannerisms, emotions, speech patterns, historical data, or the like to determine the user's temperament and apply this to the user's visual representation. The system can use any detectable attributes to assess the user's temperament for application in the visual representation. [00018] To generate a representative model of a target or object in a physical space, a capture device can capture an image of the depth of the scene and scan targets or objects in the scene. A target can be a human target, such as a user, in physical space. Thus, as used in this document, it is understood that the target and the user can be used interchangeably. In one embodiment, the capture device can determine whether one or more targets or objects in the scene correspond to a human target, such as the user. To determine whether a target or object in the scene corresponds to a human target, each target can be filled by flood and compared to a pattern of a human body model. Each target or object that corresponds to the human body model can then be scanned to generate a skeleton model associated with it. For example, a target identified as a human can be scanned to generate a skeleton model associated with it. The skeleton model, then, can be provided to the computational environment to track the skeleton model and provide a representation Petition 870190075890, of 07/08/2019, p. 8/79 6/69 visual representation associated with the skeleton model. The computing environment can determine what controls to perform on an application that runs in the computer environment based, for example, on user gestures that have been recognized and mapped in the skeleton model. In this way, user feedback can be displayed, such as through an avatar on a screen, and the user can control this movement of the avatar by making gestures in the physical space. [00019] The movement of the visual representation can be controlled by mapping the movement of the visual representation in the movement of the user in the physical space. For example, the target may be a human user who is moving or gesturing in physical space. The visual representation of the target can be an avatar displayed on a screen, and the movement of the avatar can match the movement of the user. The movement in the physical space can be converted into a control in a system or application space, such as a virtual space and / or a games space. For example, a user's movements can be tracked, modeled and displayed, and the user's gestures can control certain aspects of an operating system or execution application. User gestures can be converted to a control in the system or application space to apply attributes indicative of a temperament in a visual representation. [00020] The captured movement can be any movement in the physical space that is captured by the capture device, such as, a camera. The captured movement can include the movement of a target in physical space, such as, a user or an object. The captured movement can include a gesture that becomes a control in an operating system or application. 
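The temperament deduction described above can be read as a simple classification step: detected characteristics (facial expression, posture, movement energy) are scored against a table of candidate temperaments, in the spirit of the lookup table of Figure 8, and the winning temperament's attributes are applied to the visual representation. The following is a minimal sketch under that assumption; the feature names, weights and temperament labels are hypothetical illustrations, not the patent's actual table.

```python
# Minimal sketch of temperament deduction from detectable characteristics.
# Feature names, weights and temperament labels are hypothetical illustrations,
# not the patent's actual lookup table (cf. Figure 8).

TEMPERAMENT_TABLE = {
    # temperament: {feature: weight}
    "happy":      {"mouth_corners": +1.0, "movement_energy": +0.5},
    "frustrated": {"arms_folded": +1.0, "brow_furrow": +0.8},
    "lethargic":  {"movement_energy": -1.0, "head_droop": +0.7},
}

def deduce_temperament(features: dict) -> str:
    """Return the temperament whose weighted feature score is highest."""
    def score(weights):
        return sum(w * features.get(name, 0.0) for name, w in weights.items())
    return max(TEMPERAMENT_TABLE, key=lambda t: score(TEMPERAMENT_TABLE[t]))

def apply_to_avatar(avatar: dict, temperament: str) -> dict:
    """Apply attributes indicative of the temperament to the visual representation."""
    attributes = {
        "happy":      {"expression": "smile", "animation": "bounce"},
        "frustrated": {"expression": "frown", "animation": "arms_crossed"},
        "lethargic":  {"expression": "neutral", "animation": "slouch"},
    }
    avatar.update(attributes.get(temperament, {}))
    return avatar

if __name__ == "__main__":
    frame = {"arms_folded": 1.0, "brow_furrow": 0.9, "movement_energy": 0.1}
    mood = deduce_temperament(frame)
    print(mood, apply_to_avatar({"name": "player_avatar"}, mood))
```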
The movement can be dynamic, such as a running movement, or the movement can be static, such as, a user who is posed with little movement. Petition 870190075890, of 07/08/2019, p. 9/79 7/69 [00021] The system, methods and components of facial and body recognition to convey the user's attitudes and movements described in this document can be incorporated in a multimedia console, such as a game console, or any other video device. computing in which you want to display a visual representation of a target, which includes, by way of example and without any intended limitation, satellite receivers, signal decoders, arcade games, personal computers (PCs), portable phones, personal digital assistants ( PDAs), and other handheld devices. [00022] Figure 1 illustrates an exemplary modality of a configuration of a target recognition, analysis and tracking system 10 that can employ techniques to apply user characteristics to a visual representation. In the example mode, a user 18 is playing a boxing game. In an exemplary modality, system 10 can recognize, analyze, and / or track a human target, such as user 18. System 10 can gather information related to movements, facial expressions, body language, user emotions, etc. , in the physical space. For example, the system can identify and scan the human target 18. System 10 can use body posture recognition techniques to identify the temperament of the human target 18. For example, if user 18 is hunched, fold his hands over his chest, and moves his head to the side with lethargic movement, the system 10 can identify the user's body parts 18 and how they move. System 10 can compare movements to a library of emotions, moods, attitudes, expressions, etc., to interpret the user's temperament. [00023] As shown in Figure 1, the target recognition, analysis and tracking system 10 can include a computational environment 12. Computational environment 12 can be a computer, Petition 870190075890, of 07/08/2019, p. 10/79 8/69 a game system or console, or similar. According to an exemplary embodiment, the computing environment 12 can include hardware components and / or software components, so that the computing environment 12 can be used to run applications, such as game applications, non-game applications, or similar. [00024] As shown in Figure 1, the target recognition, analysis and tracking system 10 can additionally include a capture device 20. The capture device 20 can be, for example, a camera that can be used to visually monitor one or more more users, such as user 18, so that the gestures performed by one or more users can be captured, analyzed and tracked to perform one or more controls or actions within an application, as will be described in more detail below. [00025] According to one modality, the target recognition, analysis and tracking system 10 can be connected to an audiovisual device 16, such as a television, a monitor, a high definition television (HDTV), or similar, which can provide game, visual and / or audio applications to a user, such as user 18. For example, computing environment 12 may include a video adapter, such as a graphics card and / or an audio adapter , such as a sound card that can provide audiovisual signals associated with the game application, non-game application, or similar. The audiovisual device 16 can receive the audiovisual signals from the computational environment 12 and, then, it can send the game, visual and / or audio applications with the audiovisual signals to the user 18. 
According to one modality, the audiovisual device 16 can be connected to the computing environment 12, for example, via an S-Video cable, a coaxial cable, an HDMI cable, a DVI cable, a VGA cable, or the like. Petition 870190075890, of 07/08/2019, p. 11/79 9/69 [00026] As shown in Figure 1, the target recognition, analysis and tracking system 10 can be used to recognize, analyze and / or track a human target, such as user 18. For example, user 18 can be tracked using the capture device 20, so that the movements of the user 18 can be interpreted as controls that can be used to affect the application that is run by the computer environment 12. Thus, according to one modality, the User 18 can move his body to control the application. System 10 can track the user's body and the movements made by the user's body, which include the gestures that control aspects of the system, such as, application, operating system, or similar. The system can compare the user's body posture, facial expressions, expressions and vocal tone, targeted contemplations, etc., to determine a user's temperament or attitude and apply characteristics of this temperament or attitude to the avatar. [00027] System 10 can convert an input on a capture device 20 into an animation, the input being representative of a user's movement, so that the animation is triggered by this input. In this way, the user's movements can map into a visual representation 40, so that the user's movements in the physical space are performed by avatar 40. The user's movements can be gestures that are applicable to a control in an application. As shown in Figure 1, in an exemplary mode, the application that runs in the computational environment 12 may be a boxing game that user 18 may be playing. [00028] Computational environment 12 can use audiovisual device 16 to provide a visual representation of a player avatar 40 that user 18 can control with his movements. For example, user 18 can throw a punch at the physical space to make player avatar 40 throw a punch at the Petition 870190075890, of 07/08/2019, p. 12/79 10/69 game. Player avatar 40 may have the characteristics of the user identified by the capture device 20, or system 10 may use the attributes of a well-known boxer or portray the physique of a professional boxer for the visual representation that maps the user's movements. System 10 can track the user and modify the characteristics of the user's avatar based on the user's detectable attributes in the physical space. The computational environment 12 can also use the audiovisual device 16 to provide a visual representation of an opponent boxer 38 to the user 18. According to an exemplary modality, the computer environment 12 and the capture device 20 of the recognition, analysis and target tracking 10 can be used to recognize and analyze the punch of user 18 in the physical space, so that the punch can be interpreted as a game control of player avatar 40 in the game space. Multiple users can interact with each other from remote locations. For example, the visual representation of the opposing boxer 38 may be representative of another user, such as a second user in the physical space with user 18 or a network user in a second physical space. [00029] Other movements by the user 18 can also be interpreted as other controls or actions, such as controls for bob, weave, shuffle, block, jab, or throw a variety of punches of different intensity. 
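As a concrete illustration of how a tracked movement such as the punch of user 18 might be converted into a game control for player avatar 40, the sketch below compares successive hand positions from the skeleton model against a speed threshold; the joint names, frame rate and threshold are illustrative assumptions rather than values from the described system.

```python
# Illustrative sketch: interpret a fast forward hand movement as a "punch"
# control for the on-screen avatar. Thresholds and joint names are assumed.
import math

FRAME_RATE_HZ = 30.0          # assumed capture rate
PUNCH_SPEED_THRESHOLD = 2.0   # metres per second, assumed

def hand_speed(prev_pos, curr_pos, dt=1.0 / FRAME_RATE_HZ):
    """Speed of the hand joint between two frames, positions in metres."""
    return math.dist(prev_pos, curr_pos) / dt

def interpret_frame(prev_frame, curr_frame):
    """Return a game control ("punch" or None) for this pair of skeleton frames."""
    speed = hand_speed(prev_frame["right_hand"], curr_frame["right_hand"])
    moving_forward = curr_frame["right_hand"][2] < prev_frame["right_hand"][2]
    if speed > PUNCH_SPEED_THRESHOLD and moving_forward:
        return "punch"   # avatar 40 throws a punch in the game space
    return None

if __name__ == "__main__":
    prev = {"right_hand": (0.2, 1.3, 2.5)}   # (x, y, z) metres from the camera
    curr = {"right_hand": (0.2, 1.3, 2.3)}   # hand moved 0.2 m toward the camera
    print(interpret_frame(prev, curr))       # -> "punch"
```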
In addition, some movements can be interpreted as controls that can correspond to actions other than the control of the player avatar 40. For example, the player can use movements to end, pause or save a game, select a level, view high scores, if communicate with a friend, etc. In addition, a complete range of user movement 18 can be available, used and analyzed in any suitable way to interact with an application. Petition 870190075890, of 07/08/2019, p. 13/79 11/69 [00030] In the exemplary modalities, the human target, such as, user 18 can have an object. In such modalities, the user of an electronic game may be holding the object, so that the movements of the player and the object can be used to adjust and / or control game parameters. For example, the movement of a player holding a racket can be tracked and used to control a racket on the screen in an electronic sports game. In another example, the movement of a player holding an object can be tracked and used to control a weapon on the screen in an electronic combat game. [00031] The user's gestures or movements can be interpreted as controls that can correspond to actions different from the control of the player avatar 40. For example, the player can use movements to end, pause or save a game, select a level, view high scores, communicating with a friend, etc. The player can use movements to apply attributes indicative of a temperament to the visual representation of the user. In a virtual way, any controllable aspect of an operating system and / or application can be controlled by movements of the target, such as the user 18. According to other exemplary modalities, the target recognition, analysis and tracking system 10 can interpret target movements to control aspects of an operating system and / or application that are outside the scope of games. [00032] An application of a user's attribute to a visual representation or the detection of certain emotions or attitudes of the user can be an aspect of the operation system and / or application that can be controlled or recognized from the user's gestures. For example, a gesture for the user's hands folded over his chest can be a gesture recognized as a mood of frustration. The system's recognition of a gesture that indicates that the user is Petition 870190075890, of 07/08/2019, p. 14/79 12/69 frustrated, along with a user expression, such as a frown, can result in a visual representation that reflects a frustrated temperament. [00033] User gestures can be controls applicable to an operating system, non-game aspects or a non-game application. User gestures can be interpreted as object manipulation, such as controlling a user interface. For example, considering a user interface that has blades or a compensated interface vertically aligned from left to right, where selecting each blade or tab opens the options for various controls within the application or system. The system can identify the user's hand gesture for the movement of a tab, where the user's hand in the physical space is virtually aligned with a tab in the application space. The gesture, which includes a pause, a grabbing motion and then a flick of the hand to the left, can be interpreted as selecting a tab and then moving the tab out of the way to open the next tab. . [00034] Figure 2 illustrates an exemplary embodiment of a capture device 20 that can be used for target recognition, analysis and tracking, where the target can be a user or an object. 
According to an exemplary embodiment, the capture device 20 can be configured to capture video with depth information that includes a depth image that can include depth values using any technique that includes, for example, flight time, structured light , stereo image, or similar. According to one embodiment, the capture device 20 can organize the calculated depth information into Z layers or layers that can be perpendicular to a geometric axis Z extending from the depth camera along its line of sight. Petition 870190075890, of 07/08/2019, p. 15/79 13/69 [00035] As shown in Figure 2, the capture device 20 can include an image camera component 22. According to an exemplary embodiment, the image camera component 22 can be a depth camera that can capture the depth image of a scene. The depth image can include a two-dimensional (2-D) pixel area of the captured scene where each pixel in the 2-D pixel area can represent a depth value, such as a length or distance, for example, in centimeters, millimeters, or similar, of an object in the scene captured from the camera. [00036] As shown in Figure 2, according to an exemplary embodiment, the imaging camera component 22 can include an IR light component 24, a three-dimensional (3-D) camera 26, and an RGB camera 28 that can be used to capture the depth image of a scene. For example, in flight time analysis, the IR light component 24 of the capture device 20 can emit infrared light over the scene and then can use sensors (not shown) to detect the retrograde diffusion light from the surface of one or more targets and objects in the scene using, for example, the 3-D 26 camera and / or the RGB 28 camera. In some embodiments, the pulsed infrared light can be used, so that the time between a pulse of output light and a corresponding input light pulse can be measured and used to determine a physical distance from the capture device 20 to a particular location on the targets or objects in the scene. In addition, in other exemplary embodiments, the phase of the outgoing light wave can be compared to the phase of the incoming light wave to determine a phase shift. The phase shift, then, can be used to determine a physical distance from the capture device 20 to a particular location on targets or objects. Petition 870190075890, of 07/08/2019, p. 16/79 14/69 [00037] According to another exemplary modality, flight time analysis can be used to indirectly determine a physical distance from the capture device 20 to a particular location on targets or objects when analyzing the intensity of the beam. light reflected over time through various techniques that include, for example, forming a pulse image of obturated light. [00038] In another exemplary embodiment, the capture device 20 can use a structured light to capture depth information. In such an analysis, standardized light (that is, light displayed as a known pattern, such as a grid pattern or a strip pattern) can be projected onto the scene, for example, through the IR 24 light component. the hit to the surface of one or more targets or objects in the scene, the pattern may become deformed in response. Such deformation of the pattern can be captured, for example, by camera3-D 26 and / or RGB camera 28 and then can be analyzed to determine a physical distance from the capture device 20 to a particular location on targets or objects. 
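The time-of-flight and phase-shift analyses in paragraphs [00036] and [00037] reduce to two standard relations: distance = (speed of light × round-trip time) / 2, and, for modulated light, distance = (speed of light × phase shift) / (4π × modulation frequency). A short numeric sketch follows; the pulse timing and modulation frequency are example values only.

```python
# Numeric sketch of the two depth measurements described above.
# The pulse timing and modulation frequency below are example values only.
import math

C = 299_792_458.0  # speed of light, m/s

def depth_from_pulse(round_trip_seconds: float) -> float:
    """Time-of-flight: the light travels to the target and back."""
    return C * round_trip_seconds / 2.0

def depth_from_phase(phase_shift_rad: float, modulation_hz: float) -> float:
    """Phase shift of the reflected wave relative to the emitted wave.

    Unambiguous only within one modulation wavelength (phase shift < 2*pi).
    """
    return C * phase_shift_rad / (4.0 * math.pi * modulation_hz)

if __name__ == "__main__":
    # A 20 ns round trip corresponds to about 3 m.
    print(f"{depth_from_pulse(20e-9):.2f} m")
    # A pi/2 phase shift at 30 MHz modulation is about 1.25 m.
    print(f"{depth_from_phase(math.pi / 2, 30e6):.2f} m")
```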
[00039] According to another embodiment, the capture device 20 can include two or more physically separate cameras that can view a scene from different angles, to obtain stereo visual data that can be resolved to generate depth information. [00040] Capture device 20 may additionally include a microphone 30 or a set of microphones. The microphone 30 may include a transducer or sensor that can receive and convert the sound into an electrical signal. According to one embodiment, the microphone 30 can be used to reduce feedback between the capture device 20 and the computational environment 12 in the target recognition, analysis and tracking system 10. In addition, the microphone 30 can be Petition 870190075890, of 07/08/2019, p. 17/79 15/69 used to receive audio signals that can also be provided by the user to control applications, such as game applications, non-game applications, or similar, that can be run by the computing environment 12. [00041] In an exemplary embodiment, the capture device 20 can additionally include a processor 32 which can be in operative communication with the image camera component 22. Processor 32 can include a standardized processor, a specialized processor, a microprocessor, or similar that can execute instructions that may include instructions to receive a depth image, determine whether a suitable target can be included in the depth image, convert the appropriate target into a target skeleton representation or model, or any other suitable instruction. [00042] Capture device 20 may additionally include a memory component 34 that can store instructions that can be executed by processor 32, images or image frames captured by 3-d 26 camera or RGB 28 camera, or any other information , suitable images, or the like. According to an exemplary embodiment, the memory component 34 may include random access memory (RAM), read-only memory (ROM), cache, Flash memory, a hard disk, or any other suitable storage component. As shown in Figure 2, in one embodiment, the memory component 34 can be a separate component in communication with the image capture component 22 and the processor 32. According to another embodiment, the memory component 34 can be integrated into the processor 32 and / or image capture component 22. [00043] As shown in Figure 2, the capture device 20 can be in communication with the computing environment 12 through Petition 870190075890, of 07/08/2019, p. 18/79 16/69 of a communication link 36. Communication link 36 can be a wired connection that includes, for example, a USB connection, a Firewire connection, an Ethernet cable connection, or similar and / or a wireless connection , such as an 802.1 lb, g, a, or n wireless connection. According to one embodiment, the computational environment 12 can provide a clock for the capture device 20 which can be used to determine when to capture, for example, a scene via the communication link 36. [00044] In addition, the capture device 20 can provide the depth information and images captured, for example, by the 3-D camera 26 and / or RGB camera 28, and a skeleton model that can be generated by the capture device captures 20 to computational environment 12 through communication link 36. Computational environment 12, then, can use the skeleton model, depth information, and captured images, for example, to control an application, such as a game or word processor. For example, as shown in Figure 2, computational environment 12 can include a gesture library 190. 
[00045] As shown in Figure 2, the computational environment 12 may include a gesture library 190 and a gesture recognition mechanism 192. The gesture recognition mechanism 192 may include a collection of gesture filters 191. A filter may comprise code and associated data that can recognize gestures or otherwise process depth, RGB or skeleton data. Each filter 191 can comprise information that defines a gesture along with parameters or metadata for that gesture. For example, a pitch, which comprises the movement of one hand from the back of the body beyond the front of the body, can be implemented as a 191 gesture filter that comprises information that represents the movement of one hand from user to Petition 870190075890, of 07/08/2019, p. 19/79 17/69 from the back of the body beyond the front of the body, as this movement can be captured by a depth camera. The parameters can then be adjusted for this gesture. Where the gesture is a pitch, a parameter can be a limit speed that the hand needs to reach, a distance that the hand must travel (absolute or relative to the size of the user as a whole), and a confidence rating through the recognition that the gesture occurred. These parameters for the gesture can vary between applications, between the contexts of a single application or within the context of an application over time. [00046] Although it is contemplated that the gesture recognition mechanism may include a collection of gesture filters, where a filter may comprise code or otherwise represent a component for processing depth, RGB, or skeleton data, the use of a filter is not intended to limit analysis to a filter. The filter is a representation of an exemplary component or section of code that analyzes data from a scene received by a system, and compares this data with the basic information that represents a gesture. As a result of the analysis, the system can produce a corresponding output if the input data corresponds to the gesture. The basic information that represents the gesture can be adjusted to correspond to the recurring attribute in the data history representative of the user's capture movement. The basic information, for example, can be part of a gesture filter, as described above. However, any suitable way to analyze the input data and management data is contemplated. [00047] A gesture can be recognized as a gesture of identity of temperament. In an exemplary modality, the movement in the physical space can be representative of a gesture recognized as a request to apply attributes of a temperament Petition 870190075890, of 07/08/2019, p. 20/79 18/69 particular to the visual representation of a target. A plurality of gestures can represent a particular temperament identity gesture. In this way, a user can control the form of visual representation by making a gesture in the physical space that is recognized as a temperament identity gesture. For example, as described above, the user's movement can be compared to a gesture filter, such as gesture filter 191 from Figure 2. The gesture filter 191 can comprise information for a temperament identity gesture from of temperament identity gestures 196 in gesture library 190. [00048] A plurality of temperament identity gestures can represent a temperament that has attributes to be applied to a visual representation on the screen. For example, an excited identification gesture can be recognized from the identity of a user's movement that comprises an up and down jump movement with the user's arms raised in the air. 
The result can be the application of attributes, directly mapped in the user's movement and / or animations in addition to the user's movement in the user's visual representation. [00049] The data captured by cameras 26, 28 and device 20 in the form of the skeleton model and movements associated with it can be compared to gesture filters 191 in the gesture library 190 to identify when a user (as represented by the model skeleton) performed one or more gestures. In this way, entries in a filter, such as filter 191, may comprise items, such as joint data about a user's joint position, such as the angles formed by the bones in the joint, RGB color data from the scene, and the rate of change of an aspect of the user. As mentioned, the parameters can be adjusted for the gesture. The outputs of a 191 filter may comprise items, such as Petition 870190075890, of 07/08/2019, p. 21/79 19/69 such as, the confidence that a certain gesture is performed, the speed at which a gesture movement is performed and the time at which the gesture occurs. [00050] Computational environment 12 may include a processor 195 that can process the depth image to determine which targets are in a scene, such as, a user 18 or an object in the environment. This can be done, for example, by grouping pixels of the depth image that shares a similar distance value. The image can also be analyzed to produce a skeleton representation of the user, where attributes such as joints and fabrics that pass between the joints are identified. Skeleton mapping techniques exist to capture a person with a depth camera and from there determine various points on this user's skeleton, joints of the hands, wrists, elbows, knees, nose, ankles, shoulders, and where the pelvis meets the spine. Other techniques include transforming the image into a representation of the person's body model and transforming the image into a representation of the person's mesh model. [00051] In one embodiment, processing is performed on the capture device 20 itself, and the raw image data values of depth and color (where the capture device 20 comprises a 3D camera 26) are transmitted to the computational environment 12 through link 36. In another modality, the processing is performed by a processor 32 coupled to camera 402 and, then, the analyzed image data is sent to the computational environment 12. In yet another modality, both raw image data and the analyzed image data is sent to the computational environment 12. The computational environment 12 can receive the analyzed image data, however, it can still receive the raw data to execute the current process or application. For example, if an image Petition 870190075890, of 07/08/2019, p. 22/79 20/69 of the scene is transmitted over a computer network to another user, computational environment 12 can transmit raw data for processing by another computational environment. [00052] Computational environment 12 can use gesture library 190 to interpret the movements of the skeleton model and control an application based on the movements. Computational environment 12 can model and display a representation of a user, such as in the form of an avatar or a pointer on a screen, such as on a display device 193. Display device 193 may include a display monitor computer, a television screen, or any suitable display device. For example, a camera-controlled computer system can capture user image data and display user feedback on a television screen that maps the user's gestures. 
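Paragraphs [00045] and [00049] describe a gesture filter whose inputs are per-frame joint data and whose outputs include a confidence that the gesture occurred, the speed of the movement and the time at which it occurred. A minimal sketch of such a filter for the throw (pitch) gesture is given below; the class layout, parameter values and confidence heuristic are assumptions for illustration, not the implementation of filter 191. In the described system, outputs of this kind would feed the comparison against the gesture library 190 discussed in paragraph [00049].

```python
# Sketch of a gesture filter in the spirit of filter 191: the throw gesture,
# a hand moving from behind the body past the front of the body.
# Parameter values and the confidence heuristic are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ThrowGestureFilter:
    min_hand_speed: float = 1.5      # m/s, tunable per application or context
    min_travel: float = 0.4          # metres the hand must travel

    def evaluate(self, frames):
        """frames: list of dicts with 'time' (s) and 'right_hand_z' (m, body-relative;
        negative = behind the torso, positive = in front of it)."""
        start, end = frames[0], frames[-1]
        duration = end["time"] - start["time"]
        travel = end["right_hand_z"] - start["right_hand_z"]
        speed = travel / duration if duration > 0 else 0.0
        crossed_body = start["right_hand_z"] < 0.0 < end["right_hand_z"]
        # Confidence grows with how far past the thresholds the motion is.
        confidence = 0.0
        if crossed_body and travel >= self.min_travel and speed >= self.min_hand_speed:
            confidence = min(1.0, speed / (2.0 * self.min_hand_speed))
        return {"confidence": confidence, "speed": speed, "time": end["time"]}

if __name__ == "__main__":
    frames = [
        {"time": 0.00, "right_hand_z": -0.30},
        {"time": 0.15, "right_hand_z": 0.05},
        {"time": 0.30, "right_hand_z": 0.40},
    ]
    print(ThrowGestureFilter().evaluate(frames))
```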
User feedback can be displayed as an avatar on a screen, such as, as shown in Figures 1A and 1 Β. The movement of the avatar can be directly controlled by mapping the movement of the avatar to those movements of the user. User gestures can be interpreted by controlling certain aspects of the application. [00053] As described above, it may be desirable to apply attributes of a temperament to a visual representation of the target. For example, a user may wish to perform the visual representation of the user with a dance on the screen that indicates the user's happiness. The user can initiate the application of such attributes by performing a particular temperament identity gesture. [00054] According to an exemplary embodiment, the target can be a human target in any position, such as, standing or sitting, a human target with an object, two or more human targets, one or more appendages of one or more human or similar targets that can be scanned, tracked, modeled and / or evaluated to generate a screen Petition 870190075890, of 07/08/2019, p. 23/79 21/69 virtual, compare the user with one or more stored profiles and / or store profile information 198 about the target in a computational environment, such as the computational environment 12. Profile information 198 can be in the form of profiles user profiles, personal profiles, application profiles, system profiles, or any other suitable method for storing data for later access. Profile information 198 can be accessible through an application or be widely available on the system, for example. Profile information 198 can include lookup tables to load specific user profile information. The virtual screen can interact with an application that can be executed by the computational environment 12 described above in relation to Figures 1A-1B. [00055] According to the exemplary modalities, the search tables can include specific user profile information. In one embodiment, the computational environment, such as computational environment 12, can include profile data stored 198 about one or more users in the lookup tables. The stored profile data 198 may include, among other things, the scanned or estimated body sizes, skeleton models, body models, voice samples or passwords, the target ages, previous gestures, target limitations and standard use through target of the system, such as, for example, a tendency to sit, left or right, or a tendency to stay too close to the capture device. This information can be used to determine whether there is a match between a target in a capture scene and one or more user profiles 198 that, in one mode, can allow the system to adapt the virtual tele to the user, or adapt other elements of the computing or gaming experience according to profile 198. [00056] One or more personal profiles 198 can be stored in computer environment 12 and used in numerous user sessions, Petition 870190075890, of 07/08/2019, p. 24/79 22/69 or one or more personal profiles can be created for a single session only. Users can have the option of establishing a profile where they can provide information to the system, such as, a voice or body scan, age, personal preferences, right or left direction, an avatar, a name, or similar. Personal profiles can also be provided for guests who do not provide any information to the system other than scheduling in the capture space. A temporary personal profile can be established for one or more guests. At the end of a guest session, the personal guest profile can be stored or deleted. 
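Paragraphs [00054] and [00055] describe matching a target in the capture scene against stored profile data 198 such as estimated body size, handedness or typical position. The sketch below illustrates one way such a lookup could be organized; the profile fields, tolerance and matching rule are hypothetical, not the stored profile format of the described system.

```python
# Sketch of matching a scanned target against stored user profiles (cf. profile 198).
# Profile fields and the matching tolerance are hypothetical illustrations.

STORED_PROFILES = {
    "alice": {"height_m": 1.65, "handedness": "left",  "typical_distance_m": 1.8},
    "bob":   {"height_m": 1.82, "handedness": "right", "typical_distance_m": 2.5},
}

def match_profile(scan, height_tolerance_m=0.06):
    """Return (profile_name, profile) for the closest stored profile, or None.

    scan: dict with the target's estimated 'height_m' and observed 'handedness'.
    """
    best_name, best_diff = None, height_tolerance_m
    for name, profile in STORED_PROFILES.items():
        diff = abs(profile["height_m"] - scan["height_m"])
        if diff <= best_diff and profile["handedness"] == scan.get("handedness"):
            best_name, best_diff = name, diff
    return (best_name, STORED_PROFILES[best_name]) if best_name else None

if __name__ == "__main__":
    scanned_target = {"height_m": 1.80, "handedness": "right"}
    print(match_profile(scanned_target))   # matches "bob" within tolerance
```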
[00057] The gesture library 190, the gesture recognition mechanism 192, and the profile 198 can be implemented in hardware, software or a combination of both. For example, gesture library 190 and gesture recognition mechanism 192 can be implemented as software that runs on a processor, such as processor 195 in computing environment 12 (or in processing unit 101 in Figure 3 or unit number 259 of Figure 4). [00058] It is emphasized that the block diagrams shown in Figures 2 and Figures 3-4 described below are exemplary and are not intended to imply a specific implementation. Thus, processor 195 or 32 in Figure 1, processing unit 101 in Figure 3 and processing unit 259 in Figure 4, can be implemented as a single processor or multiple processors. Multiple processors can be distributed or centrally located. For example, gesture library 190 can be implemented as software that runs on processor 32 of the capture device or can be implemented as software that runs on processor 195 in computational environment 12. Any combination of processes Petition 870190075890, of 07/08/2019, p. 25/79 23/69 users who are suitable to perform the techniques described in this document are contemplated. Multiple processors can communicate wirelessly, over hard wire or a combination of these. [00059] Furthermore, as used in this document, a computing environment 12 can refer to a single computing device or a computing system. The computing environment can include non-computing components. The computing environment can include a display device, such as the display device 193 shown in Figure 2. A display device can be completely separate, but coupled to the computing environment, or the display device can be the computing device that processes and displays, for example. In this way, a computing system, computing device, computing environment, computer, processor, or other computing component can be used interchangeably. [00060] The library of gestures and filter parameters can be adjusted for an application or context of an application through a management tool. A context can be a cultural context, and this can be an environmental context. A cultural context refers to the culture of a user using a system. Different cultures can use similar gestures to give markedly different meanings. For example, an American user who wants to tell another user to look or use their eyes can place their index finger on their head close to the distal side of their eye. However, for an Italian user, this gesture can be interpreted as a reference to the mafia. [00061] Similarly, different contexts can exist between different environments of a single application. A shooting game is adopted Petition 870190075890, of 07/08/2019, p. 26/79 24/69 first user pain that involves operating a motor vehicle. While the user is on foot, pointing with the fingers towards the ground and extending the hand in front of and away from the body can represent a punch gesture. While the user is in the driving context, this same movement can represent a gear change gesture. In relation to changes in visual representation, different gestures can trigger different changes that depend on the environment. A different modification trigger gesture can be used to enter an application specific modification mode versus a broad system modification mode. Each modification mode can be packaged with an independent set of gestures corresponding to the modification mode, launched as a result of the modification trigger gesture. 
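This context dependence can be summarized as a lookup keyed on both the active environment and the recognized movement, so that the same motion maps to different controls. A small sketch follows (the context names, movement labels and actions are illustrative assumptions, not part of the described system), after which the bowling example continues.

```python
# Sketch of context-dependent gesture interpretation: the same recognized
# movement maps to different controls depending on the active environment.
# Context names, movement labels and actions are illustrative assumptions.

GESTURE_MAP = {
    ("on_foot", "fingers_down_hand_extended"): "punch",
    ("driving", "fingers_down_hand_extended"): "change_gear",
    ("bowling", "swing_arm"):                  "roll_ball",
    ("menu",    "swing_arm"):                  "select_item",
}

def interpret(context: str, recognized_movement: str) -> str:
    """Resolve a recognized movement to a control for the current context."""
    return GESTURE_MAP.get((context, recognized_movement), "unmapped")

if __name__ == "__main__":
    print(interpret("on_foot", "fingers_down_hand_extended"))  # punch
    print(interpret("driving", "fingers_down_hand_extended"))  # change_gear
    print(interpret("menu", "swing_arm"))                      # select_item
```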
For example, in a bowling game, a swinging arm movement can be a gesture identified as swinging a ball to clear a virtual bowling alley. However, in another application, the swinging arm movement can be a gesture identified as a request to stretch the user's avatar arm displayed on the screen, there may also be one or more menu environments, where the user can save his game, select between your character equipment or perform similar actions that do not include direct play. In this environment, this same gesture can be a third sense, such as selecting, such as, selecting something or advancing to another screen. [00062] Gestures can be grouped into gender packages of complementary gestures that are likely to be used by an application in that gender. Complementary gestures - complementary like those that are used with each other, or complementary as a change in a parameter of one will change a parameter of another - can be grouped into gender packages. These packages can be provided for an application, which you can select from at Petition 870190075890, of 07/08/2019, p. 27/79 25/69 minus one. The application can adjust or modify a gesture parameter or gesture filter 191 to better suit the unique aspects of the application. When this parameter is adjusted, a second complementary parameter (in the interdependent sense) of the gesture or a second gesture is also adjusted, so that the parameters remain complementary. Genre packages for video games can include genres such as shooting, action, driving and first-user sports. [00063] Figure 3 illustrates an exemplary modality of a computational environment that can be used to interpret one or more gestures in the target recognition, analysis and tracking system. The computing environment, such as the computing environment 12 described above in relation to Figures 1A-2, can be a multimedia console 100, such as a game console. As shown in Figure 3, multimedia console 100 has a central processing unit (CPU) 101 that has a level 1 cache 102, a level 2 cache 104, and a flash ROM (Read Only Memory) 106. The cache level 1 102 and a level 2 cache 104 temporarily store data and therefore reduce the number of memory access cycles, thereby improving processing speed and throughput. CPU 101 can be provided having more than one core, and thus additional level 1 and level 2 caches 102 and 104. Flash ROM 106 can store executable code that is loaded during an initial phase of an initialization process when multimedia console 100 is ON. [00064] A graphics processing unit (GPU) 108 and a video encoder / video codec (encoder / decoder) 114 form a video processing chain for high-speed, high-resolution graphics processing. Data is trans Petition 870190075890, of 07/08/2019, p. 28/79 26/69 ported from the graphics processing unit 108 to the video encoder / video codec 114 via a bus. The video processing thread sends data to an A / V (audio / video) port 140 for transmission on a television or other screen. A memory controller 110 is connected to GPU 108 to facilitate processor access to various types of memory 112, but is not limited to RAM (Random Access Memory). [00065] Multimedia console 100 includes an I / O controller 120, a system management controller 122, an audio processing unit 123, a network interface controller 124, a first USB host controller 126, a second controller USB host 128 and a front panel I / O sub-assembly 130 that are preferably implemented in a module 118. 
USB controllers 126 and 128 serve as hosts for peripheral controllers 142 (1) 142 (2), a wireless adapter 148, and an external memory device 146 (for example, flash memory, external CD / DVD ROM drive, removable media, etc.). Network interface 124 and / or wireless adapter 148 provides access to a network (for example, the Internet, home network, etc.) and can be any of a wide variety of different wired and wireless adapter components that include an Ethernet card, a modem, a Bluetooth module, a cable modem, and the like. [00066] System memory 143 is provided to store application data that is loaded during the boot process. A media drive 144 is provided and may comprise a DVD / CD drive, hard drive, or other removable media drive, etc. Media unit 144 can be internal or external to multimedia console 100. Application data can be accessed via media unit 144 for execution, playback, etc. through Petition 870190075890, of 07/08/2019, p. 29/79 27/69 of multimedia console 100. Media unit 144 is connected to I / O controller 120 via a bus, such as a Serial ATA bus or other high-speed connection (for example, IEEE 1394). [00067] System management controller 122 provides a variety of service functions related to ensuring availability of multimedia console 100. The audio processing unit 123 and an audio codec 132 form a corresponding audio processing thread with processing high fidelity and stereo. The audio data is transported between the audio processing unit 123 and the audio codec 132 via a communication link. The audio processing thread outputs data to the A / V 140 port for playback through an external audio player or device that has audio capabilities. [00068] The front panel I / O sub-assembly 130 supports the functionality of the power button 150 and the eject button 152, as well as any LEDs (light-emitting diodes) or other indicators exposed on the outer surface of the console multimedia 100. A system power supply module 136 provides power to the components of the multimedia console 100. A fan 138 cools the circuitry within the multimedia console 100. [00069] CPU 101, GPU 108, memory controller 110, and several other components within multimedia console 100 are interconnected via one or more buses that include serial and parallel buses, a memory bus, a peripheral bus and a processor or local bus that uses any of a variety of bus architectures. For example, such architectures may include a Peripheral Component Interconnector (PCI) bus, PCI-Express bus, etc. Petition 870190075890, of 07/08/2019, p. 30/79 28/69 [00070] When multimedia console 100 is turned ON, application data can be loaded from system memory 143 into memory 112 and / or caches 102, 104 and run on CPU 101. The application can have an interface graphical user interface that provides a coherent user experience when browsing different types of media on the multimedia console 100. In operation, applications and / or other media contained in media unit 144 can be launched or run from media unit 144 to provide additional functionality to the multimedia console 100. [00071] The multimedia console 100 can be operated as a standalone system simply by connecting the system to a television or other screen. In this standalone mode, multimedia console 100 allows one or more users to interact with the system, watch movies or listen to music. 
However, with the integration of the broadband connectivity available through network interface 124 or wireless adapter 148, multimedia console 100 can be additionally operated as a participant in a larger network community. [00072] When multimedia console 100 is ON, an adjusted amount of hardware resources is reserved for system use by the multimedia console operation system. These resources can include a memory reserve (for example, 16MB), CPU and GPU cycles (for example, 5%), network bandwidth (for example, 8 kbs.), Etc. Because these resources are reserved at system startup time, reserved resources do not exist from the application display. [00073] In particular, the memory reserve is preferably large enough to contain the simultaneous boot kernel, applications and system drivers. The CPU reserve is preferably constant, so that if the reserved CPU usage is not used by Petition 870190075890, of 07/08/2019, p. 31/79 29/69 system applications, an idle segment will consume any unused cycles. [00074] Regarding the GPU reservation, the light messages generated by the system applications (for example, pop-ups) are displayed when using a GPU interrupt to program the code to provide pop-up in an overlay. The amount of memory required for an overlay depends on the size of the overlay area and the overlay preferably scales with screen resolution. Where a full user interface is used by the simultaneous system application, it is preferable to use a resolution independent of the application resolution. A sealer can be used to adjust this resolution, so that the need to change the frequency and cause a TV resynchronization is eliminated. [00075] After multimedia console 100 boots and system resources are reserved, simultaneous system applications run to provide system functionality. System functionality is encapsulated in a set of system applications that run within the reserved system resources described above. The operating system kernel identifies strings that are system application strings versus game application strings. System applications are preferably programmed to run on CPU 101 at predetermined times and intervals to provide a coherent system resource display for the application. Programming serves to minimize cache disruption for the game application that runs on the console. [00076] When a simultaneous system application requires audio, audio processing is programmed asynchronously in the game application due to time sensitivity. A multimedia console application manager (described below) controls the audio level of a gaming application (for example, mute, attenuated) when Petition 870190075890, of 07/08/2019, p. 32/79 30/69 system applications are active. [00077] Input devices (for example, 142 (1) and 142 (2) controllers) are shared by game applications and system applications. Input devices are not reserved resources, however, they must be switched between the system applications and the gaming application, so that each has a device focus. The application manager preferably controls the switching of the input stream, without the knowledge of the game application and a driver maintains the state information that refers to the focus switching. Cameras 26, 28 and capture device 20 can define additional input devices for console 100. [00078] Figure 4 illustrates another exemplary modality of a computational environment 220 which can be the computational environment 12 shown in Figures 1A-2 used to interpret one or more gestures in a target recognition, analysis and tracking system. 
The computing system environment 220 is just one example of a suitable computing environment and is not intended to deduce any limitations on the scope of use or functionality of the subject in question presently described. Nor should computing environment 220 be interpreted as having any dependency or requirement that relates to any one or combination of components illustrated in the exemplary operating environment 220. In some embodiments, the various computing elements shown may include the circuitry configured to evidence the specific aspects of this description. For example, the term circuitry used in the description may include specialized hardware components configured to perform functions through firmware or switches. In other exemplary embodiments, the term circuitry may include a general purpose processing unit, memory, Petition 870190075890, of 07/08/2019, p. 33/79 31/69 etc., configured by software instructions that incorporate operable logic to perform functions. In the exemplary modalities where the circuitry includes a combination of hardware and software, an implementer can write source code that incorporates logic and the source code can be compiled into machine-readable code that can be processed by the general purpose processing unit . Since someone skilled in the art can assess that the state of the art has developed to a point where there is little difference between hardware, software, or a hardware / software combination, selecting hardware versus software to perform specific functions is a choice of project left to an implementer. More specifically, someone skilled in the art can assess that a software process can be transformed into an equivalent hardware structure, and a hardware structure can be transformed into an equivalent software process. Thus, the selection of a hardware implementation versus a software implementation is a design choice left to the implementer. [00079] In Figure 4, the computing environment 220 comprises a computer 241, which typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by computer 241 and includes both volatile and non-volatile media, removable media and non-removable media. System memory 222 includes computer storage media in the form of volatile and / or non-volatile memory, such as read-only memory (ROM) 223 and random access memory (RAM) 260. A basic input / output system 224 (BIOS), which contains the basic routines that help to transfer information between the elements in computer 241, such as, during startup, it is typically stored in ROM 223. RAM 260 contains typical Petition 870190075890, of 07/08/2019, p. 34/79 32/69 data and / or program modules that are immediately accessible and / or presently operated by the processing unit 259. By way of example, and without limitation, Figure 4 illustrates the operating system 225, the application programs 226 , other program modules 227 and program data 228. [00080] Computer 241 may also include other removable / non-removable, volatile / non-volatile computer storage media. For example only, Figure 4 illustrates a hard disk drive 238 that reads and writes to the non-removable, non-volatile magnetic medium, a magnetic disk drive 239 that reads or writes to a removable, non-volatile magnetic disk 254, and an optical disc drive 240 that reads or writes to a removable, non-volatile optical disc 253, such as a CD ROM or other optical medium. 
Other removable/non-removable, volatile/non-volatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile discs, digital video tape, solid-state RAM, solid-state ROM, and the like. Hard disk drive 238 is typically connected to system bus 221 through a non-removable memory interface, such as interface 234, and magnetic disk drive 239 and optical disk drive 240 are typically connected to system bus 221 through a removable memory interface, such as interface 235.
[00081] The drives and their associated computer storage media discussed above and illustrated in Figure 4 provide storage of computer-readable instructions, data structures, program modules and other data for computer 241. In Figure 4, for example, hard disk drive 238 is illustrated as storing operating system 258, application programs 257, other program modules 256 and program data 255. Note that these components can be either the same as or different from operating system 225, application programs 226, other program modules 227 and program data 228. Operating system 258, application programs 257, other program modules 256 and program data 255 are given different numbers here to illustrate that, at a minimum, they are different copies. A user can enter commands and information into the computer 241 through input devices, such as a keyboard 251 and pointing device 252, commonly referred to as a mouse, trackball or touch pad. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 259 through a user input interface 236 that is coupled to the system bus, but they can be connected by other interface and bus structures, such as a game port or a universal serial bus (USB). Cameras 26, 28 and capture device 20 can define additional input devices for console 100. A monitor 242 or other type of display device is also connected to system bus 221 through an interface, such as a video interface 232. In addition to the monitor, computers can also include other peripheral output devices, such as speakers 244 and printer 243, which can be connected through an output peripheral interface 233.
[00082] Computer 241 can operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 246. Remote computer 246 can be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above in relation to computer 241, although only a memory storage device 247 has been illustrated in Figure 4. The logical connections shown include a local area network (LAN) 245 and a wide area network (WAN) 249, but can also include other networks. Such networking environments are common in offices, enterprise-wide computer networks, intranets and the Internet.
[00083] When used in a LAN networking environment, computer 241 is connected to LAN 245 through a network interface or adapter 237. When used in a WAN networking environment, computer 241 typically includes a modem 250 or other means for establishing communications over WAN 249, such as the Internet.
Modem 250, which can be internal or external, can be connected to system bus 221 via user input interface 236, or another appropriate mechanism. In a networked environment, the program modules shown in relation to computer 241, or portions thereof, can be stored on the remote memory storage device. By way of example, and not limitation, Figure 4 illustrates remote application programs 248 as residing on memory device 247. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between computers can be used.
[00084] The computer-readable storage medium may comprise computer-readable instructions for modifying a visual representation. The instructions may comprise instructions for rendering the visual representation, receiving data of a scene, where the data includes data representative of a user's temperament identity gesture in a physical space, and modifying the visual representation based on the user's temperament identity gesture, where the temperament identity gesture is a gesture that maps to a control to apply attributes indicative of a temperament to the visual representation of the user.
[00085] Figure 5A shows an exemplary skeleton mapping of a user that can be generated from image data captured by the capture device 20. In this embodiment, a variety of joints and bones are identified: each hand 502, each forearm 504, each elbow 506, each bicep 508, each shoulder 510, each hip 512, each thigh 514, each knee 516, each lower leg 518, each foot 520, the head 522, the torso 524, the top 526 and bottom 528 of the spine, and the waist 530. Where more points are tracked, additional attributes can be identified, such as the bones and joints of the fingers or toes, or individual attributes of the face, such as the nose and eyes.
[00086] Through moving his body, a user can create gestures. A gesture comprises a motion or pose by a user that can be captured as image data and parsed for meaning. A gesture can be dynamic, comprising a motion, such as mimicking throwing a ball. A gesture can be a static pose, such as holding one's forearms 504 crossed in front of one's torso 524. A gesture can also incorporate props, such as swinging a mock sword. A gesture may comprise more than one body part, such as clapping the hands 502 together, or a subtler motion, such as pursing one's lips.
[00087] A user's gestures can be used for input in a general computing context. For example, various motions of the hands 502 or other body parts can correspond to common system-wide tasks, such as navigating up and down a hierarchical list, opening a file, closing a file, and saving a file. For example, a user can hold his hand with the fingers pointing upward and the palm facing the capture device 20. He can then close his fingers toward the palm to make a fist, and this can be the gesture indicating that the focused window in a window-based computing user interface environment should be closed. Gestures can also be used in a video-game-specific context, depending on the game. For example, with a driving game, various motions of the hands 502 and feet 520 can correspond to steering a vehicle in a direction, shifting gears, accelerating and braking.
In this way, a gesture can indicate a wide variety of motions that map to a displayed user representation, and in a wide variety of applications, such as video games, text editors, word processing, data management, etc.
[00088] A user can generate a gesture that corresponds to walking or running by walking or running in place in the physical space. For example, the user can alternately raise and lower each leg 512-520 to mimic walking without moving. The system can parse this gesture by analyzing each hip 512 and each thigh 514. A step can be recognized when one hip-thigh angle (as measured relative to a vertical line, where a standing leg has a hip-thigh angle of 0° and a forward horizontally extended leg has a hip-thigh angle of 90°) exceeds a predetermined threshold relative to the other thigh. A walk or run can be recognized after some number of consecutive steps by alternating legs. The time between the two most recent steps can be considered a period. After some number of periods where the threshold angle is not met, the system can determine that the walking or running gesture has ceased.
[00089] Given a walking or running gesture, an application can set values for parameters associated with this gesture. These parameters can include the above threshold angle, the number of steps required to initiate a walking or running gesture, a number of periods where no step occurs to end the gesture, and a threshold period that determines whether the gesture is a walk or a run. A fast period can correspond to a run, as the user will be moving his legs quickly, and a slower period can correspond to a walk.
[00090] A gesture can be associated with a set of default parameters at first, which the application can override with its own parameters. In this scenario, an application is not forced to provide parameters, but can instead use a set of default parameters that allow the gesture to be recognized in the absence of application-defined parameters. Information related to the gesture can be stored for purposes of predefined animation.
[00091] There are a variety of outputs that can be associated with the gesture. There may be a baseline "yes or no" as to whether a gesture is occurring. There may also be a confidence level, which corresponds to the likelihood that the user's tracked movement corresponds to the gesture. This can be a linear scale that ranges over floating point numbers between 0 and 1, inclusive. Where an application that receives this gesture information cannot accept false positives as input, it can use only those recognized gestures that have a high confidence level, such as at least 0.95. Where an application must recognize every instance of the gesture, even at the cost of false positives, it can use gestures that have at least a much lower confidence level, such as one merely greater than 0.2. The gesture can have an output for the time between the two most recent steps, and where only a first step has been registered, this can be set to a reserved value, such as -1 (since the time between any two steps must be positive). The gesture can also have an output for the highest thigh angle reached during the most recent step.
[00092] Another exemplary gesture is a "heel lift jump". In this, a user can create the gesture by raising his heels off the ground, but keeping his toes planted.
[00093] Alternatively, the user can jump in the air, where his feet 520 leave the ground entirely.
The system can parse the skeleton for this gesture by analyzing the angle relationship of the shoulders 510, hips 512 and knees 516 to see whether they are in a position of alignment equal to standing upright. Then the upper 526 and lower 528 spine points can be monitored for any upward acceleration. A sufficient combination of acceleration can trigger a jump gesture. A sufficient combination of acceleration with a particular gesture can satisfy the parameters of a transition point.
[00094] Given this "heel lift jump" gesture, an application can set values for the parameters associated with this gesture. The parameters can include the above acceleration threshold, which determines how fast some combination of the user's shoulders 510, hips 512 and knees 516 must move upward to trigger the gesture, as well as a maximum angle of alignment between the shoulders 510, hips 512 and knees 516 at which a jump can still be triggered. The outputs can comprise a confidence level, as well as the user's body angle at the time of the jump.
[00095] Setting parameters for a gesture based on the particulars of the application that will receive the gesture is important in identifying gestures accurately. Properly identifying a user's gestures and intent greatly helps in creating a positive user experience.
[00096] An application can set values for the parameters associated with various transition points to identify the points at which to use predefined animations. Transition points can be defined by various parameters, such as the identification of a particular gesture, a velocity, an angle of a target or object, or any combination thereof. If a transition point is defined at least in part by the identification of a particular gesture, then properly identifying gestures helps to increase the confidence level that the parameters of a transition point have been met.
[00097] Another parameter for a gesture can be a distance moved. Where a user's gestures control the actions of a visual representation in a virtual environment, that avatar can be an arm's length from a ball. If the user wishes to interact with the ball and grab it, this may require the user to extend his arm 502-510 to full length while making the grab gesture. In this situation, a similar grab gesture where the user only partially extends his arm 502-510 may not achieve the result of interacting with the ball. Likewise, a parameter of a transition point can be the identification of the grab gesture, where if the user only partially extends his arm 502-510, thereby not achieving the result of interacting with the ball, the user's gesture will also not meet the parameters of the transition point.
[00098] A gesture or a portion of it can have as a parameter a volume of space in which it must occur. This volume of space can typically be expressed in relation to the body where a gesture comprises body movement. For example, a football throw gesture for a right-handed user can be recognized only in the volume of space no lower than the right shoulder 510a, and on the same side of the head 522 as the throwing arm 502a-310a. It may not be necessary to define all the bounds of a volume, such as with this throw gesture, where an outer bound away from the body is left undefined, and the volume extends out indefinitely, or to the edge of the scene being monitored.
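By way of illustration only, the following is a minimal Python sketch of how the walking or running gesture described above could be recognized from tracked joint positions, using a hip-thigh angle threshold, step periods, and a confidence output. The joint format, the threshold values, and the names (hip_thigh_angle, WalkRunFilter) are assumptions made for this sketch, not parameters defined by the described system.

import math
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch only: joint format, thresholds and output format are
# assumptions, not the described system's actual parameters.

STEP_ANGLE_THRESHOLD = 30.0   # hip-thigh angle difference (degrees) that counts as a step
RUN_PERIOD_THRESHOLD = 0.4    # seconds between steps: faster than this is a "run"
MAX_IDLE_PERIODS = 3          # periods with no step before the gesture is considered ended


def hip_thigh_angle(hip, knee):
    """Angle of the thigh relative to vertical, in degrees.

    hip and knee are (x, y) joint positions from the skeleton model, with y
    increasing upward: 0 degrees when standing, 90 degrees when the leg is
    extended horizontally forward.
    """
    dx, dy = knee[0] - hip[0], knee[1] - hip[1]
    return abs(math.degrees(math.atan2(dx, -dy)))


@dataclass
class WalkRunFilter:
    last_step_time: Optional[float] = None
    last_period: float = -1.0        # reserved value until two steps have been seen
    periods_without_step: int = 0
    steps: int = 0

    def update(self, t, left_hip, left_knee, right_hip, right_knee):
        """Feed one frame of joint data; returns (is_gesture, confidence, label)."""
        left = hip_thigh_angle(left_hip, left_knee)
        right = hip_thigh_angle(right_hip, right_knee)

        # A step: one thigh raised past the threshold relative to the other.
        if abs(left - right) > STEP_ANGLE_THRESHOLD:
            if self.last_step_time is not None:
                self.last_period = t - self.last_step_time
            self.last_step_time = t
            self.steps += 1
            self.periods_without_step = 0
        else:
            self.periods_without_step += 1

        if self.periods_without_step > MAX_IDLE_PERIODS:
            self.steps = 0   # the walking or running gesture has ceased

        is_gesture = self.steps >= 2
        # Confidence grows with consecutive steps, clamped to the range 0..1.
        confidence = min(1.0, self.steps / 4.0) if is_gesture else 0.0
        label = "run" if 0 < self.last_period < RUN_PERIOD_THRESHOLD else "walk"
        return is_gesture, confidence, label

An application could override STEP_ANGLE_THRESHOLD or RUN_PERIOD_THRESHOLD with its own values, mirroring the default parameters that an application can override as described above.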
[00099] Figure 5B provides further details of an exemplary embodiment of the gesture recognition mechanism 192 of Figure 2. As shown, the gesture recognition mechanism 190 may comprise at least one filter 519 to determine a gesture or gestures. A filter 519 comprises information that defines a gesture 526 (hereinafter referred to as a "gesture"), and can comprise at least one parameter 528, or metadata, for this gesture 526. For example, a throw, which comprises movement of one of the hands from behind the back of the body to past the front of the body, can be implemented as a gesture 526 comprising information representing the movement of one of the user's hands from behind the back of the body to past the front of the body, as this movement would be captured by the depth camera. Parameters 528 can then be set for this gesture 526. Where the gesture 526 is a throw, a parameter 528 can be a threshold velocity that the hand must reach, a distance that the hand must travel (either absolute or relative to the size of the user as a whole), and a confidence rating by the recognition mechanism 192 that the gesture 526 occurred. These parameters 528 for the gesture 526 can vary between applications, between the contexts of a single application, or within one context of an application over time.
[000100] Filters can be modular or interchangeable. In one embodiment, a filter has a number of inputs, each of these inputs having a type, and a number of outputs, each of these outputs having a type. In this situation, a first filter can be replaced with a second filter that has the same number and types of inputs and outputs as the first filter without altering any other aspect of the recognition mechanism 190 architecture. For example, there may be a first filter for driving that takes skeleton data as input and outputs a confidence that the gesture 526 associated with the filter is occurring and an angle of steering. Where one wishes to substitute this first driving filter with a second driving filter - perhaps because the second driving filter is more efficient and requires fewer processing resources - one can do so simply by replacing the first filter with the second filter, as long as the second filter has those same inputs and outputs - one input of skeleton data type and two outputs of confidence type and angle type.
[000101] A filter does not need to have a parameter 528. For example, a "user height" filter that returns the user's height may not allow any parameters to be adjusted. An alternative "user height" filter can have adjustable parameters - such as whether to account for the user's footwear, hairstyle, headwear and posture when determining the user's height.
[000102] The inputs to a filter can comprise things such as joint data about a user's joint position, such as angles formed by the bones that meet at the joint, RGB color data from the scene, and the rate of change of an aspect of the user. The outputs from a filter can comprise things such as the confidence that a given gesture is being made, the speed at which a gesture motion is made, and the time at which a gesture motion is made.
[000103] A context can be a cultural context, and it can be an environmental context. A cultural context refers to the culture of a user using a system. Different cultures may use similar gestures to convey considerably different meanings.
For example, an American user who wants to tell another user to "look" or "use your eyes" can put his index finger on his head close to the distal side of his eye. However, for an Italian user, this gesture may be interpreted as a reference to the mafia.
[000104] Similarly, different contexts can exist among different environments of a single application. Take a first-person shooter game that involves operating a motor vehicle. While the user is on foot, making a fist with the fingers toward the ground and extending the fist in front of and away from the body can represent a punch gesture. While the user is in the driving context, that same motion can represent a gear-shifting gesture. There may also be one or more menu environments, where the user can save his game, select among his character's equipment, or perform similar actions that do not comprise direct game play. In that environment, this same gesture can have a third meaning, such as selecting something or advancing to another screen.
[000105] The gesture recognition mechanism 190 may have a base recognition mechanism 517 that provides functionality to a gesture filter 519. In one embodiment, the functionality that the recognition mechanism 517 implements includes an input-over-time archive that tracks recognized gestures and other input, a Hidden Markov Model implementation (where the modeled system is assumed to be a Markov process - one where a present state encapsulates any past state information necessary to determine a future state, so no other past state information need be maintained for this purpose - with unknown parameters, and hidden parameters are determined from the observable data), as well as other functionality required to solve particular instances of gesture recognition.
[000106] Filters 519 are loaded and implemented on top of the base recognition mechanism 517 and can utilize services provided by the mechanism 517 to all filters 519. In one embodiment, the base recognition mechanism 517 processes received data to determine whether it meets the requirements of any filter 519. Since these provided services, such as parsing the input, are provided once by the base recognition mechanism 517 rather than by each filter 519, such a service need only be processed once in a period of time as opposed to once per filter 519 for that period, so the processing required to determine gestures is reduced.
[000107] An application can use the filters 519 provided by the recognition mechanism 190, or it can provide its own filter 519, which plugs in to the base recognition mechanism 517. In one embodiment, all filters 519 have a common interface to enable this plug-in characteristic. In addition, all filters 519 can utilize parameters 528, so a single gesture tool, as described below, can be used to debug and tune the entire filter system 519.
[000108] These parameters 528 can be tuned for an application or a context of an application by a gesture tool 521. In one embodiment, the gesture tool 521 comprises a plurality of sliders 523, each slider 523 corresponding to a parameter 528, as well as a pictorial representation of a body 524. As a parameter 528 is adjusted with a corresponding slider 523, the body 524 can demonstrate both actions that would be recognized as the gesture with those parameters 528 and actions that would not be recognized as the gesture with those parameters 528, identified as such.
This visualization of the gesture parameters 528 provides an effective means of both debugging and fine-tuning a gesture.
[000109] Figure 6 shows a system 600 that can comprise a capture device 608, a computing device 610 and a display device 612. For example, the capture device 608, computing device 610 and display device 612 can each comprise any suitable device that performs the desired functionality, such as the devices described in relation to Figures 1-5B. It is contemplated that a single device can perform all of the functions in the system 600, or any suitable combination of devices can perform the desired functions. For example, the computing device 610 can provide the functionality described in relation to the computing environment 12 shown in Figure 2 or the computer in Figure 3. As shown in Figure 2, the computing environment 12 can include the display device and a processor. The computing device 610 can also comprise its own camera component or can be coupled to a device that has a camera component, such as the capture device 608.
[000110] In this example, a depth camera 608 captures a scene in a physical space 601 in which a user 602 is present. The depth camera 608 processes the depth information and/or provides the depth information to a computer, such as the computer 610. The depth information can be interpreted for display of a visual representation of the user 602. For example, the depth camera 608 or, as shown, a computing device 610 to which it is coupled, can output to a display 612.
[000111] The visual representation of the user 602 in the physical space 601 can take any form, such as an animation, a character, an avatar, or the like. For example, the visual representation of the target, such as the user 602, can initially be a lump of digital clay that the user 602 can sculpt into desired shapes and sizes, or a character representation, such as the monkey 604 shown on the display device 612. The visual representation can be a combination of the user's 602 attributes and an animation or stock model. The visual representation can be a stock model provided with the system 600 or application. For example, the user 602 can select from a variety of stock models that are provided by a game application. In a baseball game application, for example, the options for visually representing the user 602 can take any form, from a representation of a well-known baseball player, to a piece of caramel or an elephant, to an imaginary character or symbol, such as a cursor or hand symbol. The stock model can be modified with attributes of the user that are detected by the system. The visual representation can be specific to an application, such as packaged with a program, or the visual representation can be available across applications or system-wide.
[000112] The exemplary visual representation shown in Figure 6, as shown on the display device 612, is that of a monkey character 603. Though additional frames of image data can be captured and displayed, the frame shown in Figure 6 is selected for exemplary purposes. The rate at which frames of image data are captured and displayed can determine the level of continuity of the displayed motion of the visual representation. It is also noted that an alternative or additional visual representation can correspond to another target in the physical space 601, such as another user or a non-human object, or the visual representation can be a partially or entirely virtual object.
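Returning to the modular filter architecture described in paragraphs [000100] to [000108], the following Python sketch illustrates one possible shape of a common filter interface with typed inputs and outputs and tunable parameters 528. The class names, the parameter names and the simple steering heuristic are assumptions made for illustration; this is not an actual API of the described system.

from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Any

# Illustrative sketch only: names and logic are assumptions, not an actual API.


@dataclass
class FilterOutput:
    confidence: float                                        # 0..1 likelihood that the gesture is occurring
    values: dict = field(default_factory=dict)               # e.g. an angle of steering


class GestureFilter(ABC):
    """Common interface: skeleton data in, typed outputs out, tunable parameters."""

    def __init__(self, **parameters: Any):
        self.parameters = parameters   # the per-application parameters 528

    @abstractmethod
    def evaluate(self, skeleton_frame: dict) -> FilterOutput:
        ...


class DrivingFilter(GestureFilter):
    """Takes skeleton data and outputs a confidence and an angle of steering."""

    def evaluate(self, skeleton_frame: dict) -> FilterOutput:
        left, right = skeleton_frame["left_hand"], skeleton_frame["right_hand"]
        # Steering angle estimated from the relative height of the two hands.
        angle = (left[1] - right[1]) * self.parameters.get("angle_scale", 90.0)
        # Confidence is higher when both hands are held out in front of the body.
        hands_in_front = left[2] > 0 and right[2] > 0
        confidence = 0.9 if hands_in_front else 0.1
        return FilterOutput(confidence=confidence, values={"steering_angle": angle})

Because every filter exposes the same input and output types, a more efficient DrivingFilter could be swapped in without changing any other aspect of the recognition mechanism, which is the substitution property described in paragraph [000100].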
[000113] The system 600 can capture information about the physical space 601, such as depth information, image information, RGB data, etc. According to one embodiment, the image data can include a depth image or an image from a depth camera 608 and/or an RGB camera, or an image from any other detector. For example, camera 608 can process the image data and use it to determine the shape, colors and size of a target. Each target or object that matches the human pattern can be scanned to generate a model, such as a skeleton model, a flood model, a human mesh model, or the like, associated with it. For example, as described above, the depth information can be used to generate a skeleton model of the user, such as that shown in Figure 5A, where the system identifies the user's body parts, such as the head and limbs. Using, for example, the depth values in a plurality of observed pixels that are associated with a human target and the extent of one or more aspects of the human target, such as the height, the width of the head, or the width of the shoulders, or the like, the size of the human target can be determined.
[000114] The system 600 can track the movements of the user's limbs by analyzing the captured data and translating it into the skeleton model. The system 600 can then track the skeleton model and map the movement of each body part to a respective portion of the visual representation. For example, if the user 602 shakes his arm, the system can capture this movement and apply it to the arm of the virtual monkey 603, so that the virtual monkey also shakes his arm. In addition, the system 600 can identify a gesture from the user's movement by evaluating the user's position in a single frame of capture data or over a series of frames, and apply the gesture to the visual representation.
[000115] The system can use captured data, such as scanned data, image data or depth information, to detect characteristics. Detectable characteristics can include any characteristics related to the user or the physical space that are detectable by the system 600. For example, detectable characteristics can include target characteristics (for example, the user's facial attributes, hair color, voice analysis, etc.), gestures (that is, gestures performed by the user and recognized by the system 600), historical data (data, such as user tendency data, that is detected by the system and can be stored), application status (for example, failure/success in a game application), or any characteristic detectable by the system that may be indicative of a user's temperament or can be used to deduce a user's temperament.
[000116] The system can analyze one or more detectable characteristics to deduce a user's temperament. The deduction can be based on an inference or assumption, or it can be based on scientific methods, such as the results of a study of temperaments and correlated characteristics. Thus, the deduction can be based on a simple analysis of typical characteristics that indicate a particular temperament, the identification of a gesture that indicates a specific temperament, a comparison of the detectable attributes to an in-depth psychological analysis of the characteristics that correlate with different temperaments, or the like.
[000117] The target characteristics can include information that can be associated with the particular user 602, such as behaviors, speech patterns, facial expressions, skeleton movements, spoken words, historical data, voice recognition information, or the like.
The target characteristics can comprise any attributes of the target, such as: eye size, type and color; hair length, type and color; skin color; clothing and clothing colors. For example, colors can be identified based on a corresponding RGB image. Other target characteristics for a human target can include, for example, height and/or arm length, and can be obtained based, for example, on a body scan, a skeleton model, the extent of the user 602 on a pixel area, or any other suitable process or data. The computing system 610 can use body recognition techniques to interpret the image data and can size and shape the visual representation of the user 602 according to the size, shape and depth of the user's 602 appendages.
[000118] As described, the system 600 can identify data from the physical space that includes an indication of the user's temperament. For example, the system 600 can gather information related to the user's movements, facial expressions, body language, emotions, etc., in the physical space. The system 10 can use body posture recognition techniques to assist in identifying the emotions or temperament of the human target 18. For example, the system 600 can analyze and track a user's skeleton model to determine how the user moves. The system 600 can track the user's body and the movements made by the user's body, including gestures that control aspects of the system, such as an application, the operating system, or the like. The system can identify the user's body posture, facial expressions, vocal expressions and tone, directed gazes, etc. The user's vocal expressions can provide an indication of the user's temperament. For example, the language used, the tone of voice, the pitch, the volume, and the like, can convey a sense of the user's temperament. For example, a harsh tone can be interpreted as anger or aggression. Other tones can be tense, modal, breathy, whispery, creaky, calm, excited, happy, or any other tone. Thus, the user's characteristics are good indicators of the user's temperament.
[000119] The system can apply at least one of the detected target characteristics of the user, as captured by the system 600, to the visual representation of the user. For example, the system can detect that the user is wearing glasses and a red shirt, and the system can apply glasses and a red shirt to the virtual monkey 603, which, in this example, is the visual representation of the user. The system can identify the user's facial movements, such as the movement of the user's eyebrows and/or a frowning or smiling expression. The system can detect words spoken by the user and the user's tone of voice, or the user's body position, etc. For example, the system can detect a person's right arm and have the fidelity to distinguish the upper arm, lower arm, fingers, thumb, joints in the fingers, etc. The system may be able to identify a color of the user's shirt that corresponds to the user's upper and lower arms and apply the color appropriately to the visual representation. The system may be able to identify a ring on a finger or a tattoo on the user's hand and, based on the model of the user generated by the system, apply the detected target characteristics to the visual representation to mimic the user's attributes in the physical space. The visual representation can look like the user, move like the user, have clothing that resembles the user's, etc.
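By way of a concrete illustration of the mapping just described, the following Python sketch shows one way detected target characteristics (for example, a shirt color) and per-frame joint positions could be applied to the corresponding parts of a visual representation. The structure (AvatarPart, apply_frame) and the one-to-one joint mapping are assumptions made for this sketch, not the described system's implementation.

from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

# Illustrative sketch only: the avatar structure and mapping are assumptions.


@dataclass
class AvatarPart:
    name: str
    position: Tuple[float, float, float] = (0.0, 0.0, 0.0)
    color: Optional[Tuple[int, int, int]] = None   # RGB applied from detected features


@dataclass
class Avatar:
    parts: Dict[str, AvatarPart] = field(default_factory=dict)

    def apply_frame(self, skeleton_frame: dict, detected_features: dict) -> None:
        """Map each tracked joint to the corresponding avatar part and apply
        detected appearance features such as shirt color."""
        for joint_name, position in skeleton_frame.items():
            part = self.parts.setdefault(joint_name, AvatarPart(joint_name))
            part.position = position   # e.g. the user raises an arm, the avatar raises an arm

        shirt_rgb = detected_features.get("shirt_color")
        if shirt_rgb is not None:
            for part_name in ("torso", "upper_arm_left", "upper_arm_right"):
                if part_name in self.parts:
                    self.parts[part_name].color = shirt_rgb


# Example: one frame of tracked data applied to the avatar in real time.
avatar = Avatar()
avatar.apply_frame(
    skeleton_frame={"head": (0.0, 1.7, 2.0), "torso": (0.0, 1.2, 2.0)},
    detected_features={"shirt_color": (200, 30, 30)},   # a detected red shirt
)

A character representation such as the monkey could reuse the same joint-to-part mapping while scaling or re-proportioning the target positions, consistent with the modifications described in the following paragraphs.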
[000120] Certain target characteristics detected by the system and used to deduce the user's temperament may not be applied directly to the visual representation of the user but, rather, modified for display purposes. The user's characteristics can be modified to correspond to the form of the visual representation, to the application, to the status of the application, etc. Certain characteristics may not map directly to the visual representation of the user where the visual representation is an imaginary character. For example, the character representation of the user, such as the monkey 603 shown on the display device 612, can be given body proportions, for example, that are similar to those of the user 602 but modified for the particular character. The monkey 603 can be given a height that is similar to the user 602, but the monkey's arms can be proportionally longer than the user's arms. The movement of the monkey's arms 604 can correspond to the movement of the user's arms, as identified by the system, but the system can modify the animation of the monkey's arms to reflect the way a monkey's arms would move.
[000121] In the example shown in Figure 6, the user is seated with the head tilted to the side, a right elbow resting on the knee, and the head being supported by the user's right hand. The user's facial expressions, body position, spoken words, or any other detectable characteristic can be applied to the virtual monkey 603, and modified if appropriate. For example, the user is frowning in the physical space. The system detects this facial expression and applies a frown to the monkey, so that the virtual monkey also frowns. In addition, the monkey is seated in a position similar to the user's, except modified to correspond to a monkey's body type and size in that position. Similarly, the system can use the user's target characteristics to deduce the user's temperament, but then apply attributes to the user's visual representation that are indicative of the temperament but that may or may not directly map the user's characteristics.
[000122] The system 600 can compare the detected target characteristics with a library of possible temperaments and determine which attributes should be applied to the user's visual representation. For example, as described further below in relation to Figures 7 and 8, the computer 610 can store lookup tables with a compilation of temperament information. The lookup tables can include general or specific temperament information. The detected characteristics can be compared to the lookup tables to deduce the user's temperament. The analysis can include a comparison of the detected body position, facial expressions, vocal tone and words, gestures, historical data, or the like.
[000123] Figure 7 shows an exemplary method of deducing a user's temperament and selecting attributes indicative of the temperament for display via the visual representation, such that the visual representation corresponds to the temperament. For example, at 702, the system receives data from a physical space that includes a user. As described above, a capture device can capture data of a scene, such as the depth image of the scene, and scan targets in the scene. The capture device can determine whether one or more targets in the scene correspond to a human target, such as a user. Each target or object that corresponds to the human body model can then be scanned to generate a skeleton model associated with it.
The skeleton model can then be provided to the computing environment for tracking the skeleton model and rendering a visual representation associated with the skeleton model.
[000124] At 704, the system can render a visual representation of the user. The visual representation can be based on the model, for example. The visual representation of a target in the physical space 601 can take any form, such as an animation, a character, an avatar, or the like. The visual representation can initially be a lump of digital clay that the user 602 can sculpt into desired shapes and sizes, or a character representation, such as the monkey 604. The visual representation can be directly modeled based on the attributes of the user detected by the capture device, or it can be an imaginary character with selected attributes of the user. The visual representation can be a combination of the user's 602 attributes and an animation or stock model.
[000125] The system can track the user and detect attributes of the user that are indicative of the user's temperament at 706. For example, the system can track the user's facial expressions and body movements to identify a temperament and then apply that temperament, so that the avatar reflects the user's emotions. The system can use any detectable attributes to assess the user's temperament for application to the visual representation. The system can analyze the detected attributes at 708 and deduce a user's temperament. For example, a processor in the system can store lookup tables or databases with temperament information. The user's detected attributes can be compared to the attributes in the database or lookup table that are indicative of different temperaments. For example, the lookup table can define attributes that are indicative of a "sad" temperament. Such attributes can be a frown, tears, a low and quiet vocal tone, and arms folded across the chest. If any or all of these attributes of a user in the physical space are detected, the processor can deduce that the user is exhibiting a "sad" temperament.
[000126] The lookup tables or database, for example, can be applicable to an application or can be system-wide. For example, a game application can define attributes that indicate the various temperaments applicable to the game. The defined temperaments can include general and specific temperaments, and temperaments can be identified by comparing one or more inputs (that is, detected attributes) to the attributes that define each temperament. Also, it is noted that references to a lookup table or database are exemplary, and it is contemplated that the temperament information related to the techniques described in this document can be accessed, stored, packaged, provided, generated, or the like, in any suitable manner.
[000127] Alternatively or in combination, the system can identify a temperament request gesture from the data captured in relation to the user at 710. For example, the user can perform a gesture that requests that a particular temperament be applied to the visual representation of the user.
[000128] At 712, the system can select attributes to apply to the visual representation of the user that reflect the temperament deduced or identified from the user's gesture. The attributes applicable to a particular temperament can be found in lookup tables or also in a database.
The selected attributes can be attributes of the user detected by the capture device, and/or the selected attributes can be animations that reflect the temperament. For example, if the system deduces that the user displays attributes indicative of a "sad" temperament, the lookup tables can indicate various animations that can reflect that temperament. The system can select any of these attributes and apply them to the visual representation of the user.
[000129] The application of the attributes to the visual representation at 714 can occur in real time. Thus, the capture of data regarding the user's mood or emotions, together with the body recognition analysis, etc., can be performed in real time and applied to the visual representation of the user in real time. The user, therefore, can see a real-time display of the user's emotions or temperament.
[000130] The system can continue to track the user and any movement in the physical space over time at 716 and apply modifications or updates to the visual representation at 718 to reflect changes in temperament. For example, the updates can be based on changes in the detected attributes and on the user's historical data. At any time, the capture device can identify the user's behaviors and mannerisms, emotions, speech patterns, or the like, to determine the user's temperament and apply these to the user's visual representation. The updates can be applied to the visual representation in real time. For example, it may be desirable for the system to capture the user's expressions and mannerisms over time to reflect the user's temperament through the visual representation.
[000131] Figure 8 shows an example of a lookup table 800 that can be used to deduce the user's temperament. The exemplary temperament lookup table 800 shown in Figure 8 includes categories of detectable characteristics, such as facial expression 802, vocal tone 804, vocal volume 806, speech 808, body position 810, gesture 812, application results 814, and historical data 816. The detected attributes or characteristics can include any attribute of the physical space for which the system can capture information through the capture device, including detectable target characteristics, application status, etc. The categories in the lookup table 800 are exemplary, as any number and type of categories can be part of the analysis of the user's temperament. For example, the categories can additionally include a detected interaction with other users or objects, an analysis of the type of clothing the user is wearing, other items on the user's body, etc. It is contemplated that any detectable attribute or characteristic of the user that can be captured by the system 600 in some manner that can be used as part of the analysis of the user's attitude or temperament may be applicable.
[000132] Three examples of detected characteristics are shown in the table 800 for three users, where each of the rows A, B and C represents the detected characteristics. The first portion 850 of the table represents the detectable characteristics of the target captured in the scene. The second portion 860 of the table represents other detectable characteristics, such as the identification of a gesture performed by the user, the status of the application and its results, and/or historical data specific to the user or the application.
The last portion 870 of the table represents the system's deduction of the user's temperament as a result of an analysis of the available detectable attributes. As stated, the categories in the table 800 are for exemplary purposes only and may be more or less inclusive of additional detectable characteristics.
[000133] Row A represents an exemplary embodiment of the characteristics detected by the system. In row A, the system detects that a first user has a facial expression that includes a frown, the results in the application are a failure, and the historical data for the first user show a tendency of the user to frown after failed results. An analysis by the system of these detected attributes may indicate that the first user's temperament is generally negative. Possibly, additional detectable attributes could provide a more specific temperament, but with the available data the system deduces the more general, generally negative temperament.
[000134] Regarding the second user, with the detectable characteristics set out in row B, the system detects a frowning facial expression, a curt vocal tone with a quiet volume, and no speech, while the user's body position comprises a leaned-back posture, with the head tilted to one side and supported by one hand. The system can determine from these attributes that the user's temperament is generally negative, or possibly bored, tired, angry, sad, etc. The system can additionally detect, in relation to the second user, that a user other than the second user is the one playing in the game application and that this other user's turn has lasted a long time, and detect, from an analysis of the second user's historical data, this user's temperament tendencies under these circumstances. With this data, the system can determine that the second user's temperament is not only generally negative, but specifically bored or disinterested. For example, the system can identify the tendency of the second user, when the second user is not the active player in the game application, to have facial expressions, tones, body positions, etc., that correspond to a bored temperament.
[000135] It is contemplated, for example, that a frowning facial expression can correspond to many temperaments. The exemplary temperaments, and the attributes that indicate each of the particular temperaments shown in the table 800, are exemplary only. Each detectable characteristic can be used to narrow the temperament to a more specific attitude or mood, or the system can simply identify a general attitude, such as generally negative or positive.
[000136] The detectable characteristics of the third user, shown in row C, include a smiling facial expression, a happy tone that is also low, the words "Yes" and "Awesome", and a body position that includes raised arms and jumping up and down. The up-and-down jumping motion can also be indicative of a gesture applicable to the application that results in a successful game result for the third user. The comparison of these detectable characteristics with the user's historical data can also provide an indication of the likely temperament of the third user based on this information. In this example, the system deduces that the user's temperament, based on the detectable characteristics, is excited.
[000137] The system can simply map the user's actual characteristics to the visual representation.
In the exemplary embodiment where the visual representation maps directly to the user's detected attributes, the user's temperament is inherently demonstrated by the visual representation, as the visual representation reflects the user's detected attributes. However, the visual representation may not always be a direct representation of the user, and so the system can modify the temperament to correspond to the form of the visual representation. Having deduced the user's temperament, the system can determine appropriate animations to apply to the user's visual representation that reflect that temperament.
[000138] For example, Figure 6 shows the application of the user's facial expressions, body position, etc., to the visual representation 603 of the user, modified to represent the corresponding attributes of the monkey character. The monkey is frowning, but the monkey's mouth may not be a mapping of the user's mouth; rather, the system can apply the detected frown to the virtual monkey's mouth in the way it might appear if a monkey furrowed its brow. The conversion of the user's temperament to the user's visual representation can take many forms and can comprise any number of animations. For example, if a user's visual representation is a house, the house may not be animated with facial attributes. In this way, the system can map the temperament onto the house by converting the user's temperament into the new form. For example, if the system detects that the user has a sad temperament, detected based on the user's facial expressions or body position, the system can convert this to the house by displaying the virtual house's windows drooping, and animating the house so that it appears to puff up and then let air out of the front door, giving the appearance that the house has sighed.
[000139] A system can deduce a temperament, which can be a mood or attitude of the user, based on the detectable characteristics. A temperament can include any representation of a user's emotional response that expresses the user's feelings or thoughts. An identified temperament can be generally positive or negative, or it can be ambivalent. The identified attitude can be more specific, such as happy, angry, frustrated, bored, sad, etc. The specificity of the attitude may depend on the library of attitudes/emotions/moods, and the system 600 can identify a range of the user's attitudes, from general to specific. For example, the system can determine from the detectable attributes of the user's upright body position and upbeat vocal tone that the user generally has a positive attitude. Alternatively, the system can determine, more specifically, that the user is excited because the upright body position includes jumping up and down and raised arms, and the user's historical data indicate that these detectable characteristics indicate an excited temperament. Different applications can have a more extensive database of both general and specific moods and temperaments, and other applications can deduce
For example, if a user leans back with his head dropped to one side, where the head is held by the user's hand, the system can identify that the user's temper is bored or disinterested. Or, for example, if a user is sitting upright with his head upright and his arms folded across his chest, with a pursed lips expression, the system can identify the user's temperament as disagreement, defensive or frustrated. In general, a negative connotation can be reflected in the user's avatar. The system can detect a change in the user's body posture as a result of the tightening of the muscles in the user's neck or shoulders. Sometimes a user's relaxed posture is simply an indication that a user is relaxing or perhaps has poor posture. A user's head position can be an indication of a user's temperament. The system can detect jaw tightening or pucker hates the user. [000141] Figure 9 shows the system 600 shown in Figure 6, where the system tracks the user's detectable attributes and deduces a temperament. Temperament can be reflected in the user's visual representation by mapping the user's detectable attributes to the visual representation. The temperament can also be reflected by an animation application that corresponds to a particular temperament in the visual representation of the user. Figure 9 shows user 602 in Petition 870190075890, of 07/08/2019, p. 62/79 60/69 three points in time in physical space 601, where 901a, 901b, and 901c represent physical space at three distinct points in time. At each point in time, user 602 may have changed, altered facial expressions, performed a different movement and / or moved the body position. System 600 can capture target user 602, in physical space 601, at each point and capture the user's detectable attributes at each point, shown at 902a, 902b, and 902c. Two examples of the display resulting from a visual representation of the user 602 are shown in exemplary display 912a and exemplary display 912b. [000142] As discussed above, a visual representation of a user can be any animation, character, avatar, or similar. The exemplary visual representations shown in Figure 9 are an avatar 905 (shown on the display device 912a) or a character 907 (shown on the display device 912b). The avatar 905, for example, can be a close representation of the user in the physical space, which maps the user's body position, hair color, clothes, etc. A character 907, for example, can be a character representation, such as the monkey shown. Character 907 can also have user characteristics captured by the 600 system. For example, facial expressions, clothing, etc., can be mapped into the character representation. [000143] System 600 can identify data from the physical space that includes an indication of the user's temperament. The 600 system can apply the user's temperament to the visual representation by applying the indicative attributes of the temperament to the visual representation of the user. In addition, the system 600 can identify a gesture from the user's movement when evaluating the user's position in a single frame of capture data or across a series of frames. The 600 system can use a combination of information from each data frame, based on changes in the data captured between Petition 870190075890, of 07/08/2019, p. 63/79 61/69 data frames and over time, the gestures identified from the captured data, and any other available information, such as voice data, to identify a user's temperament or emotion. 
[000144] In an exemplary embodiment, the avatar 905 can be provided with characteristics that are determined from the analysis of the image data. The user 602 can choose a visual representation that is mapped from the attributes of the user 602, where the characteristics of the user 602, physical or otherwise, are represented by the visual representation. The visual representation of the user 602, also called an avatar, such as the avatar 905, can be initialized based on the user's 602 attributes, such as body proportions, facial attributes, etc. For example, the skeleton model can be the base model for the generation of a visual representation of the user 602, modeled after the proportions, length and weight of the user's 602 limbs, etc. Then, the user's 602 hair color, skin, clothing and other detected characteristics can be mapped onto the visual representation.
[000145] The mapping of the user's movement may not be a direct translation of the user's movement, as the visual representation can be adapted to the modification. For example, the visual representation of the user can be an imaginary character without facial attributes. The system can reflect the user's temperament in other ways that are applicable to the form of the visual representation. In this way, the user's movements can be translated to correspond to the visual representation, with some animation added to reflect the form of the visual representation. For example, in Figure 9, the visual representation of the user shown on the display device 912b is that of a monkey character 907. Because the visual representation 907 of the user 602 is not a representation of the user's own physical structure, the movement and/or temperament of the user 602 can be translated to be consistent with the form that the visual representation 907 takes. In this example, for example, the detected attributes and/or temperaments can be translated to be consistent with the attributes of a monkey 907.
[000146] Characteristics of the user that can also be indicative of the user's temperament can be mapped onto the visual representation based on the system's analysis of the detectable characteristics, thereby mimicking the user's appearance and/or movement in the physical space. In this example, the system tracks the user's detectable characteristics in the physical space at three points in time, 901a, 901b, and 901c. The system can detect that the user, at position 902a, is sitting with his head tilted to one side and supported by one hand. The user at 902a may be frowning and may be making sounds or saying words that are indicative of a bored or frustrated temperament. In this way, the system can analyze the detectable characteristics over time and deduce the user's temperament.
[000147] In this example, the system deduces a bored temperament for the user. The system can deduce the user's temperament from the data captured of the physical space at point 901a. The system can continue to track the user's detectable attributes, and the physical space at 901b and 901c represents examples of the user at different points in time. The system can apply attributes indicative of the temperament deduced based on a single frame of captured data, such as the data captured from the scene in the physical space 901a, or over time as a result of multiple frames of captured data, such as the data captured from all three scenes 901a, 901b, 901c. The system can apply attributes
indicative of the temperament deduced based on a single frame and/or over time. Confidence in the deduced temperament can increase based on continued analysis of the user's detectable characteristics. Alternatively, the system can detect or deduce a different temperament based on changes in the detectable characteristics.
[000148] The system, in real time, can display the detected characteristics by applying them to the visual representation of the user. Thus, as shown in Figure 6, the visual representation 603 shows numerous detected characteristics of the user (for example, facial expression, body position, etc.). Similarly, the system can use the user's target characteristics to deduce the user's temperament, but then apply attributes to the user's visual representation that are indicative of the temperament and that may or may not directly map the user's characteristics. For example, the system can deduce, from the detected characteristics, that the user probably has an excited and happy temperament. The detected characteristics that indicate this temperament can be characteristics such as a jumping up-and-down motion, excited shouting, a successful activity in a game application, and a smile. The system can compare these characteristics against a database with characteristics that indicate different temperaments, for example, to deduce the user's temperament. The system can apply the target's characteristics directly to the visual representation, as these characteristics can be good examples of attributes that are indicative of the temperament. However, the system can alternatively or additionally apply attributes that are indicative of the temperament, whether or not the applied attributes are a direct mapping of the user's characteristics. For example, if the system deduces a happy and excited temperament from the user's detectable attributes, the system can animate the visual representation of the user to do a dance on the screen or animate the user's visual representation to jump up to the sky and grab a star. The system can apply other attributes indicative of the temperament, such as flashing words on the display device (for example, "I'm really happy" or something funny or silly).
[000149] In Figure 9, the exemplary animation of the avatar 905, which has numerous detected characteristics of the user, is that of the avatar 905 holding his head against the wall and saying, "I'm bored." The user 602 is not performing this action and may not be saying these words at any point as captured by the system, but the system can apply these attributes to the user because they are indicative of a bored temperament. Similarly, the display device 912b shows an exemplary display of the visual representation, where the monkey character 907 is shown dragging his arms and very slowly making a monkey sound, "Ooh. Ooh. Ah. Ah." The attributes applied to the monkey are indicative of a bored temperament. The attributes can be identified by the system based on lookup tables, for example, and can be specific to the character, such as the monkey, or the attributes can be generally applicable to many types of visual representations.
[000150] The avatar 905 and the monkey representation 907 are two different exemplary visual representations that can be displayed, and they are shown on the exemplary display devices 912a and 912b.
[000151] The user 602 can perform gestures that result in the application of attributes indicative of a particular temperament to the visual representation of the user. A temperament identity gesture can be a gesture that is interpreted as a request to apply attributes indicative of a particular temperament to the visual representation of the user. For example, the system's detection of a "bored" temperament of the user in Figure 9 may be a result of the system's recognition of a gesture of the user in the physical space that indicates a "bored" temperament. The gesture can comprise, for example, the user's body position at 902c, where the arms are folded across the chest. To differentiate the gesture from the user simply remaining in that position, the gesture may comprise a dramatic holding of the arms in position, or a slow movement of the arms into the folded position across the chest. A gesture recognition mechanism, such as the gesture recognition mechanism 192 described with respect to Figure 5B, can compare the user's movement with the gesture filters that correspond to the gestures in a gesture library 190. The captured movement of the user 602 can correspond to a temperament identity gesture 196 in the gesture library 190, for example. In this way, the application of such attributes to a visual representation can be an aspect of the operating system and/or application that can be controlled by, or recognized from, the user's gestures.

[000152] A temperament identity gesture may or may not comprise characteristics that are typically associated with a particular temperament. For example, a gesture for a "sad" temperament can be a hand movement, where the hand movement is not a characteristic that a person typically makes when they have a "sad" temperament. However, the hand movement can be a gesture that the user can perform to direct the system to apply attributes indicative of a "sad" temperament to the visual representation. The user, therefore, can control the temperament of the user's visual representation by performing gestures in the physical space. A user can intentionally or unintentionally perform a gesture that corresponds to a temperament. For example, a gesture in which the user's hands are folded across the chest can be recognized as a temperament of frustration, and the user may simply perform the movement that corresponds to the gesture because the user is feeling frustrated.

[000153] The system's recognition of a gesture that indicates that the user is frustrated, together with an expression of the user, such as a frown, can result in a visual representation that reflects a frustrated temperament. Alternatively, the user can intentionally perform a gesture in the physical space to cause a particular temperament to be applied to the visual representation of the user. For example, the user may have just won a game or done something successful in an application. A gesture for a "happy" temperament can comprise the user jumping up and down with raised-arm movements. The user can perform the "happy" temperament gesture, causing the system to apply the target characteristics and/or any number of "happy" attributes to the visual representation of the user. For example, as described above, the visual representation of the user can perform a somersault, or perform a dance, or any other activity that the system associates with an expression of the happy temperament. Thus, although gestures in the virtual space can act as controls of an application, such as an electronic game, they can also correspond to a request by the user for the system to reflect a particular temperament in the visual representation of the user.
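A minimal sketch of comparing captured movement against gesture filters, in the spirit of the gesture recognition mechanism 192 and gesture library 190 referenced above, follows; the joint features, template values, and tolerance are illustrative assumptions.

```python
# Minimal sketch: compare captured motion against gesture filters in a
# gesture library to recognize a temperament identity gesture. The joint
# features, template values, and tolerance are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class GestureFilter:
    name: str          # e.g. a temperament identity gesture
    temperament: str   # temperament the gesture requests
    template: dict     # expected joint/motion features
    tolerance: float = 0.15

GESTURE_LIBRARY = [
    GestureFilter("arms_folded_dramatically", "bored",
                  {"left_elbow_angle": 0.4, "right_elbow_angle": 0.4, "arm_speed": 0.1}),
    GestureFilter("jump_arms_raised", "happy",
                  {"left_elbow_angle": 0.9, "right_elbow_angle": 0.9, "arm_speed": 0.8}),
]

def match_temperament_gesture(captured, library=GESTURE_LIBRARY):
    """Return the temperament of the first filter whose template the
    captured features fall within, or None when no filter matches."""
    for gesture_filter in library:
        if all(abs(captured.get(key, 0.0) - value) <= gesture_filter.tolerance
               for key, value in gesture_filter.template.items()):
            return gesture_filter.temperament
    return None

captured_motion = {"left_elbow_angle": 0.35, "right_elbow_angle": 0.45, "arm_speed": 0.05}
print(match_temperament_gesture(captured_motion))  # -> "bored"
```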
[000154] The system 600 can update the user's temperament in the visual representation of the user by monitoring the detectable characteristics. The system 600 can use a combination of the information from each frame of data, such as that captured from the user at points 901a, 901b, 901c, the changes in the captured data between frames and over time, the gestures identified from the captured data, the target characteristics and the changes in the target characteristics over time, and any other available information, such as facial expressions, body posture, voice data, etc., to identify and update a temperament as it is reflected by the visual representation of the user.

[000155] The target characteristics associated with a user in the physical space can become part of a profile. The profile can be specific to a particular physical space or to a user, for example. Avatar data, which includes user attributes, can become part of the user's profile. A profile can be accessed upon entry of a user into a capture scene. If a profile matches a user based on a password, a selection by the user, body size, speech recognition, or the like, then the profile can be used in determining the visual representation of the user.

[000156] Historical data for a user can be monitored by storing information in the user's profile. For example, the system can detect user-specific attributes, such as the user's behaviors, speech patterns, emotions, sounds, or the like. The system can apply these attributes to the visual representation of the user when applying a temperament to the visual representation. For example, if the system identifies the user's temperament and selects an attribute that comprises speech to reflect the temperament, the voice of the visual representation can be patterned on the user's speech patterns, or it can even be a recording of the user's own voice.

[000157] The user-specific information can also include tendencies in modes of play of one or more users. For example, if a user tends to behave or react in a certain way, the system can track the user's tendencies to more accurately deduce the user's temperament. For example, if the system detects body positions of the user that are indicative of an "angry" temperament, and the user tends to behave in a similar way each time the user fails in the application (such as a game), the system can track this information. In this way, the system can begin to track the user's tendencies and use this information to more accurately estimate the user's temperament.
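The profile-based refinement described above could be sketched as follows, assuming a simple count of previously deduced temperaments per context; the weighting scheme is an illustrative assumption rather than anything prescribed here.

```python
# Minimal sketch: a user profile that stores observed tendencies so later
# temperament estimates can be biased toward the user's known behavior.
# The bias weighting is an illustrative assumption.
from collections import Counter, defaultdict

class UserProfile:
    def __init__(self, user_id):
        self.user_id = user_id
        self.temperament_history = Counter()            # how often each temperament was deduced
        self.context_tendencies = defaultdict(Counter)  # e.g. "lost_game" -> temperament counts

    def record(self, temperament, context=None):
        self.temperament_history[temperament] += 1
        if context is not None:
            self.context_tendencies[context][temperament] += 1

    def adjusted_estimate(self, scores, context=None):
        """Combine fresh per-frame scores with the user's historical tendencies."""
        adjusted = Counter(scores)
        history = self.context_tendencies[context] if context else self.temperament_history
        total = sum(history.values()) or 1
        for temperament, count in history.items():
            adjusted[temperament] += count / total      # small bias toward known tendencies
        return adjusted.most_common(1)[0][0] if adjusted else None

profile = UserProfile("user_602")
for _ in range(3):
    profile.record("angry", context="lost_game")        # the user tends to react this way
print(profile.adjusted_estimate({"frustrated": 1.0, "angry": 0.8}, context="lost_game"))
```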
[000158] It should be understood that the configurations and/or approaches described in this document are exemplary, and that these specific embodiments or examples should not be considered limiting. The specific routines or methods described in this document can represent one or more of any number of processing strategies. As such, the various illustrated acts can be performed in the illustrated sequence, in other sequences, in parallel, or the like. Likewise, the order of the processes described above can be changed.

[000159] Furthermore, although the present description has been described in connection with particular aspects, as illustrated in the various figures, it is understood that other similar aspects can be used, or modifications and additions can be made to the described aspects, to perform the same function of the present description without departing from it. The subject matter of the present description includes all novel and non-obvious combinations and sub-combinations of the various processes, systems and configurations, and other attributes, functions, acts, and/or properties described in this document, as well as any and all equivalents thereof. Accordingly, the methods and apparatus of the described embodiments, or certain aspects or portions thereof, may take the form of program code (that is, instructions) embodied in tangible media, such as floppy disks, CD-ROMs, hard drives, or any other machine-readable storage medium. When the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus configured to practice the described embodiments.

[000160] In addition to the specific implementations explicitly set forth in this document, other aspects and implementations will be apparent to those skilled in the art from consideration of the specification described in this document. Therefore, the present description should not be limited to any single aspect, but rather construed in breadth and scope in accordance with the appended claims. For example, the various procedures described in this document can be implemented in hardware or software, or a combination of both.
Claims (13)

1. Method (700) for applying attributes indicative of a user's temperament to a visual representation, the method characterized by the fact that it comprises the steps of:
rendering (704) the visual representation of a user;
receiving (702) data from a physical space, wherein the data is representative of the user in the physical space;
analyzing (708) at least one detectable characteristic to deduce the user's temperament, wherein the at least one detectable characteristic comprises an application status;
selecting (712) an animation to apply to the visual representation that reflects the deduced temperament of the user; and
applying (714) the animation indicative of the user's temperament to the visual representation.

2. Method (700), according to claim 1, characterized by the fact that applying (714) attributes indicative of the user's temperament to the visual representation is performed in real time with respect to the receipt of the data from the physical space.

3. Method (700), according to claim 1 or 2, characterized by the fact that the at least one detectable characteristic further comprises at least one of a user characteristic, a physical attribute of the user, a user behavior, a user speech pattern, a user voice, a gesture, or historical data.

4. Method (700), according to any one of claims 1 to 3, characterized by the fact that the data is representative of at least one of the characteristics of a user in the physical space.

5. Method (700), according to any one of claims 1 to 4, characterized by the fact that it further comprises applying at least one of the characteristics of a user to the visual representation.

6. Method (700), according to any one of claims 1 to 5, characterized by the fact that analyzing (708) the detectable characteristics to deduce the user's temperament comprises a comparison of at least one of the detectable characteristics to a table that correlates characteristics to a particular temperament.

7. Method (700), according to any one of claims 1 to 6, characterized by the fact that the user's temperament comprises at least one of a generally negative, generally positive, ambivalent, bored, happy, sad, frustrated, excited, or angry temperament.

8. Method (700), according to any one of claims 1 to 7, characterized by the fact that it further comprises:
tracking (706) changes in the at least one detectable characteristic to deduce changes in the user's temperament; and
applying (718) updates to the attributes indicative of the user's temperament to correspond to the deduced changes in the user's temperament.

9. Method (700), according to any one of claims 1 to 8, characterized by the fact that it further comprises selecting (712) attributes indicative of the user's temperament from a plurality of attributes corresponding to the user's temperament.

10. Computer-readable medium (222, 223, 224, 253, 254) characterized by the fact that it comprises a method (700) stored thereon which, when executed by a processor (195, 259), causes the processor (195, 259) to perform the steps of the method (700) as defined in any one of claims 1 to 9.
11. System (12, 100) for applying attributes indicative of a user's temperament to a visual representation, the system characterized by the fact that it comprises:
a processor (195), wherein the processor (195, 259) executes a method (700), wherein the method (700) comprises the steps of:
rendering (704) the visual representation of the user;
receiving (702) data from a physical space, wherein the data is representative of the user in the physical space;
analyzing (708) at least one detectable characteristic to deduce the user's temperament, wherein the at least one detectable characteristic comprises an application status;
selecting (712) an animation to apply to the visual representation that reflects the deduced temperament of the user; and
applying (714) the animation indicative of the user's temperament to the visual representation.

12. System (12, 100), according to claim 11, characterized by the fact that applying attributes indicative of the user's temperament to the visual representation is performed in real time with respect to the receipt of the data from the physical space.

13. System (12, 100), according to claim 11 or 12, characterized by the fact that it further comprises a memory (112) that stores a table that correlates characteristics to a particular temperament.
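A minimal sketch of the claimed method steps, rendering (704), receiving (702), analyzing (708), selecting (712), and applying (714), wired into a processing loop is given below; every class and helper here is a hypothetical stand-in rather than an API defined by this description.

```python
# Minimal sketch of the claimed steps as a processing loop.
# All classes and helpers are hypothetical stand-ins.
from dataclasses import dataclass, field

@dataclass
class Frame:                         # data received from the physical space
    detectable_characteristics: list
    application_status: str          # claim 1: at least one characteristic is an application status

@dataclass
class VisualRepresentation:
    kind: str
    applied: list = field(default_factory=list)

    def render(self):
        print(f"rendering {self.kind}")

    def apply(self, animation):
        self.applied.append(animation)
        print(f"applying animation: {animation}")

def analyze(characteristics, application_status):
    # Toy analysis: a frown plus a failed game state reads as "frustrated".
    if "frowning" in characteristics and application_status == "lost_game":
        return "frustrated"
    return None

def select_animation(kind, temperament):
    return f"{kind}_{temperament}_animation"

def run(frames, representation):
    representation.render()                                          # render (704)
    for frame in frames:                                             # receive (702)
        temperament = analyze(frame.detectable_characteristics,      # analyze (708)
                              frame.application_status)
        if temperament:
            animation = select_animation(representation.kind,        # select (712)
                                         temperament)
            representation.apply(animation)                          # apply (714)

run([Frame(["frowning"], "lost_game")], VisualRepresentation("avatar"))
```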
Similar technologies:

Publication number | Publication date | Title
US9519989B2 | 2016-12-13 | Visual representation expression based on player expression
US9824480B2 | 2017-11-21 | Chaining animations
JP5632474B2 | 2014-11-26 | Method and system for making visual display live-action through input learned from user
JP5782440B2 | 2015-09-24 | Method and system for automatically generating visual display
US20100302138A1 | 2010-12-02 | Methods and systems for defining or modifying a visual representation
JP5775514B2 | 2015-09-09 | Gesture shortcut
Patent family:

Publication number | Publication date
EP2451544A4 | 2016-06-08
EP2451544B1 | 2019-08-21
KR20120049218A | 2012-05-16
JP2012533120A | 2012-12-20
JP5661763B2 | 2015-01-28
RU2011154346A | 2013-07-10
US9519989B2 | 2016-12-13
CN102470273B | 2013-07-24
EP2451544A2 | 2012-05-16
WO2011005784A3 | 2011-05-05
RU2560794C2 | 2015-08-20
US8390680B2 | 2013-03-05
US20110007142A1 | 2011-01-13
CN102470273A | 2012-05-23
US20130187929A1 | 2013-07-25
EP3561647A1 | 2019-10-30
BR112012000391A2 | 2018-02-06
EP3561647B1 | 2020-11-18
KR101704848B1 | 2017-02-08
WO2011005784A2 | 2011-01-13
Legal status:

2018-04-10 | B25A | Requested transfer of rights approved | Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC (US)
2019-01-15 | B06F | Objections, documents and/or translations needed after an examination request according to art. 34 of the industrial property law
2019-06-25 | B06T | Formal requirements before examination
2019-10-01 | B15K | Others concerning applications: alteration of classification | Free format text: the previous classifications were A63F 13/00, A63F 13/06; IPC: A63F 13/213 (2014.01), A63F 13/23 (2014.01), A63F
2019-10-22 | B09A | Decision: intention to grant
2019-12-24 | B16A | Patent or certificate of addition of invention granted | Free format text: term of validity: 20 (twenty) years counted from 06/07/2010, subject to the legal conditions
Priority:
Application number | Publication number | Priority date | Filing date | Title
US12/500,251 | US8390680B2 | 2009-07-09 | 2009-07-09 | Visual representation expression based on player expression
PCT/US2010/041097 | WO2011005784A2 | 2009-07-09 | 2010-07-06 | Visual representation expression based on player expression
Sulfonates, polymers, resist compositions and patterning process
Washing machine
Washing machine
Device for fixture finishing and tension adjusting of membrane
Structure for Equipping Band in a Plane Cathode Ray Tube
Process for preparation of 7 alpha-carboxyl 9, 11-epoxy steroids and intermediates useful therein an